Data and Digital Technologies: an overview
Données et technologies digitales : un panorama
Article from the 18th EUTIC conference, Humanisme numérique et durabilité sociale (Digital Humanism and Social Sustainability), MSH Bordeaux, 11-13 October 2023, organized within the framework of the MSHBx 2023 call for projects (EUTIC2023).
Evangelia N. PETRAKI
Department of Economics, National and Kapodistrian University of Athens, Greece
evpetra[AT]econ.uoa.gr
Abstract
Data are the core of any information system and the raw material for the smooth operation of any organization at all levels. At the transactional level, data feed the applications that allow the correct, efficient, and fast handling of business processes, while their further analysis with machine learning methods allows the acquisition of knowledge, which leads to better decision-making at the administrative as well as the strategic level. The existence of structure in data allows their properly organized storage in databases, while the lack of structure leads to different kinds of data organization in different types of data warehouses. Different data formats and the need for interoperability create requirements for new standards for data storage, processing, and information retrieval. The term big data is widely used to denote a set of information that is too large or complex to manage, analyze, or use with conventional methods, and the need to mine knowledge from it has led to new technologies and tools. The use, analysis, and processing of data help the progress of scientific research, which provides solutions useful to humanity and leads to the need for open data that are reliable, accurate, and available to science and society. This paper presents the above topics with the aim of offering a comprehensive picture of the important role of data in the digital age, approaching them from different perspectives.
Keywords: data, databases, open data, big data, digital technologies.
Résumé
Les données sont au cœur de tout système d’information et constituent la matière première pour le bon fonctionnement de toute organisation à tous les niveaux : au niveau transactionnel d’une organisation, les données alimentent les applications qui permettent le traitement approprié, efficace et rapide des processus d’entreprise, tandis que leur analyse plus poussée à l’aide de méthodes d’apprentissage automatique permet d’acquérir des connaissances qui conduisent à une meilleure prise de décision à un niveau administratif et stratégique. L’existence d’une structure dans les données permet leur stockage de manière correctement organisée dans des bases de données, tandis que l’absence de structure conduit à différents types d’organisation des données dans différents types d’entrepôts de données. Les différents formats de données et le besoin d’interopérabilité créent des exigences pour de nouvelles normes de stockage, de traitement et de recherche d’informations. Le terme « big data » est largement utilisé pour désigner un ensemble d’informations trop volumineuses ou complexes pour être gérées, analysées ou utilisées avec des méthodes conventionnelles, tandis que la nécessité d’en extraire des connaissances a donné naissance à de nouvelles technologies et à de nouveaux outils. L’utilisation, l’analyse et le traitement des données contribuent au progrès de la recherche scientifique qui apporte des solutions utiles à l’humanité, d’où la nécessité de disposer de données ouvertes qui soient fiables, précises et accessibles à la science et à la société. Ce document tente de présenter les sujets susmentionnés dans le but d’offrir une image complète du rôle important des données à l’ère numérique en les abordant sous différentes perspectives.
Mots-clés : données, bases de données, données ouvertes, big data, technologies numériques.
Introduction
The term data is the plural of the Latin word datum, meaning "something given", i.e., something that should be taken into consideration. In a more general approach, data are symbolic representations of observations or thoughts about the world (Wilkinson, 2012). Data denote information in digital format, such as text, numbers, images, audio, or video. Data are collected, stored, and used by computers or other devices and can also be used to support decision-making (Dictionary.com; Cambridge Dictionary).
Data are the core of every information system and the raw material used for processing and for the smooth functioning of every organization, in management, administration, decision-making, etc. Data also "feed" machine learning algorithms, which drive the acquisition of knowledge from the data. This knowledge is useful for making predictions and better decisions in every area of human activity.
Data are collected constantly and on a very large scale, in multiple ways, through computers, mobile devices, sensors, special devices, etc. As a result, a large amount of information is gathered in data warehouses, and the need to analyze it and extract the knowledge hidden within it for more effective decision-making is imperative. The term big data has come to denote the huge volume and the rapid speed at which data are collected from various sources, as well as the changing requirements for tools able to process and analyze large volumes of heterogeneous data.
In recent years, data science has emerged as a new and important discipline that combines knowledge from different scientific areas, such as computer science, mathematics, and statistics, with the aim of turning available data into value for individuals, organizations, and society (Van der Aalst, 2016). Data science is the study of extracting knowledge from heterogeneous structured or unstructured data, knowledge that leads to more reliable predictions and improves the decision-making process (Dhar, 2013).
Artificial intelligence concerns the ability of a machine to reproduce the cognitive functions of a human, such as learning, planning, and creativity, and to achieve these goals effectively. Artificial intelligence relies on and requires a large amount of "quality" data, while ethical issues concerning the use of these data arise from all of the above.
In the sections that follow, these topics are presented, approaching data from different perspectives.
1. Data characteristics
This section briefly presents important features of data regarding their format, the existence or absence of structure, their availability, etc. In recent years, data have been collected everywhere, in all areas of human activity: in private, public, commercial, political, and scientific environments. The rapid development of the Internet and of cloud services has led to a large and ever-increasing amount of data being collected and made available for processing and analysis. The term data quality refers to the extent to which data meet the requirements set, are fit to be used, and satisfy the needs of their users (Hassenstein & Vanella, 2022).
1.1. Data quality
Since the beginning of the 21st century, significant technological developments such as information technology, cloud computing, the Internet of Things, and social networking have led to a continuous and rapid increase in the amount of available data. Data attract the interest of industry, academia, and governments because data analysis makes it possible to better understand customer needs, improve services, predict risks, and make better decisions. A key condition for all of the above is that data analysis be based on accurate, high-quality data (Cai & Zhu, 2015).
The requirements for data quality may differ because of the diversity of data and the different ways of using them. Related studies have identified different dimensions of data quality that represent specific characteristics (Batini et al., 2009). In all approaches, defining quality, its different dimensions, and the metrics for evaluating data is a critical activity (Cai & Zhu, 2015).
Briefly, the most important characteristics that give data quality are the following (Hassenstein & Vanella, 2022; Batini et al., 2009; Cai & Zhu, 2015); a short code sketch checking two of them appears after the list:
- Accuracy
- Completeness
- Consistency
- Integrity
- Reliability
- Relevance
- Readability
- Availability
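A minimal illustration of how some of these dimensions can be checked in practice is sketched below in Python. The records and field names are invented for the example; completeness and consistency stand in for the full set of dimensions.

```python
# A minimal sketch of automated data-quality checks; the records and
# field names are hypothetical, invented purely for illustration.
records = [
    {"id": 1, "age": 34, "country": "GR"},
    {"id": 2, "age": None, "country": "FR"},  # incomplete: missing age
    {"id": 3, "age": -7, "country": "GR"},    # inconsistent: negative age
]

# Completeness: every required field must carry a value.
incomplete = [r["id"] for r in records if r["age"] is None]

# Consistency: values must respect domain rules (here, 0 <= age <= 120).
inconsistent = [r["id"] for r in records
                if r["age"] is not None and not 0 <= r["age"] <= 120]

print(f"incomplete records: {incomplete}")      # -> [2]
print(f"inconsistent records: {inconsistent}")  # -> [3]
```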
1.2. Data format
Data are collected from multiple sources and may have different formats, e.g., text, numbers, images, sounds, videos, etc., and may reside in emails, documents, PDF files, XML or JSON files, and so on. The format of the data determines whether further processing is possible and with which technology or algorithm this can be done. In most cases, data need to be pre-processed and cleaned before being analyzed with the appropriate method to extract patterns and trends.
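As a minimal sketch of such pre-processing, the following Python fragment assumes the pandas library and a hypothetical tabular file measurements.csv with a value column; real pipelines would add format-specific parsing for XML, JSON, images, and so on.

```python
# A pre-processing sketch, assuming the pandas library and a
# hypothetical input file "measurements.csv" with a "value" column.
import pandas as pd

df = pd.read_csv("measurements.csv")      # ingest raw tabular data

df = df.drop_duplicates()                 # remove repeated records
df = df.dropna(subset=["value"])          # discard rows missing the value
df["value"] = df["value"].astype(float)   # enforce a uniform numeric type

print(df.describe())                      # quick summary of the cleaned data
```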
1.3. Data availability
Data can be private, available only to members of specific organizations or companies, or they can be open data, accessible to anyone and usable by everyone for any purpose. The terms of use of open data are specified in the open data release license.
1.4. Databases and Data structure
The way in which we manage data strongly depends on whether the data are structured or not. Depending on whether or not they exhibit structure, data are divided into three categories: structured, semi-structured, and unstructured.
Databases are used to store data. A database is a collection of well-organized records stored on mass storage media; it serves one or more applications in an optimal way and allows common and controlled handling of data input, updating, and retrieval (Yannakoudakis, 1999a).
A database captures a view of the real world and is created to be used by a specific group of users and to serve specific purposes. Database design is carried out in four consecutive stages, each receiving information from the previous one and feeding the next. These stages are (Yannakoudakis, 1999b): requirements analysis, conceptual design (Figure 3), logical design (Figure 4), and physical design. Each stage uses a corresponding level of abstraction, i.e., it hides information that does not need to be presented at that stage. Thus, one often speaks of the corresponding model: the conceptual, the logical, and the physical.
The concept of a "model" is used in many scientific disciplines and takes a different form depending on the stage of database creation. A data model is a collection of conceptual tools used to describe data, relationships, semantics, and data constraints (Silberschatz et al., 1997); that is, it is a set of conceptual tools used to describe the real-world entities captured in the database and the relationships between them (ibid.).
1.4.1. Structured data
Structured data are aggregations or sets of items described by attributes, organized in such a way that they can be easily used by a database or other technology (Batini et al., 2009). The oldest models for logical database design are the hierarchical (Figure 1) and the network model (Figure 2).
Figure 1 – Hierarchical database design example
Figure 2 – Network database design example
The relational database model (Codd, 1970) is mainly used to store structured data in tables that are related to each other through foreign keys. The relational model was proposed by Codd in a publication (ibid.) which showed that information stored in large databases can be accessed without knowing how that information is structured within the database. This approach does not require the database user to be an expert in the internal structure of the database, while all the "elements" of the database remain available for access and processing. The relational model rests on mathematical foundations but also has its own terminology. A relational database consists of relations (tables) that capture each logical entity or sub-entity of the microcosm modeled in the database. Each relation consists of attributes, each with a domain. Each distinct instance of data values over all the attributes of a relation is called a record. In each relation, the attribute or attributes that can uniquely identify a record are called candidate keys, and one of these keys is defined as the primary key of the relation. Relations are connected to each other through relationships, which are realized through foreign keys (Figure 4).
Figure 3 – Relational database design example
Figure 4 – Relational database example
Information retrieval in structured data is done through high-level query languages such as the Structured Query Language (SQL) or tools and applications like Query By Example (QBE) (Magnani & Montesi, 2004). SQL was developed to manipulate, organize, and retrieve records from relational databases. SQL commands consist of English words with a special meaning related to the action each command performs, which makes them easier to learn: e.g., the INSERT command inserts a record, the DELETE command deletes a record, the UPDATE command modifies a record, and so on. To write an SQL query, users or database administrators must know, in addition to the syntax of the command, the structure of the database, i.e., the names of the relations (tables) in which the sought data are located, the names of their fields (attributes), etc. They can define simple or complex search criteria using the logical operators AND, OR, and NOT, as well as set operations such as UNION, INTERSECT, and EXCEPT. SQL also allows sorting (ORDER BY) and grouping (GROUP BY) of the results, as well as the use of aggregate functions such as SUM, COUNT, MIN, MAX, etc.
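As a brief, self-contained illustration of these commands, the sketch below uses Python's built-in sqlite3 module; the department/employee schema is invented for the example and does not come from the paper's figures.

```python
# SQL operations illustrated with Python's built-in sqlite3 module;
# the schema is invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two relations linked by a foreign key, as in the relational model.
cur.execute("""CREATE TABLE department (
                   dept_id INTEGER PRIMARY KEY,
                   name    TEXT NOT NULL)""")
cur.execute("""CREATE TABLE employee (
                   emp_id  INTEGER PRIMARY KEY,
                   name    TEXT NOT NULL,
                   salary  REAL,
                   dept_id INTEGER REFERENCES department(dept_id))""")

cur.execute("INSERT INTO department VALUES (1, 'Sales')")
cur.executemany("INSERT INTO employee VALUES (?, ?, ?, ?)",
                [(1, 'Alice', 42000, 1), (2, 'Bob', 39000, 1)])

# Grouping (GROUP BY), aggregation (COUNT, SUM) and sorting (ORDER BY).
cur.execute("""SELECT d.name, COUNT(*) AS staff, SUM(e.salary) AS payroll
               FROM employee e JOIN department d ON e.dept_id = d.dept_id
               GROUP BY d.name
               ORDER BY payroll DESC""")
print(cur.fetchall())   # -> [('Sales', 2, 81000.0)]
conn.close()
```

The JOIN follows the foreign key from employee to department, which is exactly the mechanism through which relations are connected in the relational model.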
Another data model appropriate for structured data is the object-oriented data model. It addresses weaknesses of the relational model: it provides flexibility in defining and managing different objects and allows existing structures to be reused when creating new ones, but it is more complex than the relational model.
1.4.2. Semi-structured data
Semi-structured data usually have some form of organization or structure, but it may not be the same for all the data, it may be partial, or the structure may be implied by the data themselves. Models and languages for semi-structured data have been the subject of extensive study in recent years due to the rapid development of the World Wide Web (WWW) on the one hand, and the need to integrate data from heterogeneous sources on the other (Ludäscher et al., 1998).
The data models used to design a semi-structured data schema have different requirements from those used for the database schemas outlined previously (relational, object-oriented, etc.). XML (eXtensible Markup Language) is the basic way of representing semi-structured data. XML uses tags that describe the data and can be used in conjunction with a DTD (Document Type Definition) or an XML Schema, which define the grammar and the allowed tags of an XML file. DTDs and XML Schema have become a popular way to represent the schema of XML documents and thus a way to model semi-structured data. JSON (JavaScript Object Notation) is another widely used format for semi-structured and structured data. Finally, appropriate query languages have been developed for retrieving data from semi-structured databases. A typical example is Lorel, a user-friendly language designed for this purpose that closely resembles the traditional SQL/OQL query languages (Abiteboul et al., 1997). XQuery, developed by the XML Query working group of the W3C, is a functional query language for structured or semi-structured data usually modeled in XML, JSON, or other formats.
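To make the two formats concrete, the following sketch (Python standard library only; the book record is invented) represents the same semi-structured record in XML and in JSON and retrieves the same field from each:

```python
# The same record as XML and as JSON, parsed with Python's standard
# library; the "book" structure is invented for illustration.
import json
import xml.etree.ElementTree as ET

xml_doc = "<book><title>Data Science</title><year>2023</year></book>"
root = ET.fromstring(xml_doc)
print(root.find("title").text)        # -> Data Science

json_doc = '{"book": {"title": "Data Science", "year": 2023}}'
record = json.loads(json_doc)
print(record["book"]["title"])        # -> Data Science
```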
1.4.3. Unstructured data
Unstructured data are information, such as text and multimedia, that does not have a predefined structure and cannot be stored using traditional database models like the relational model. NoSQL (Not Only SQL) is a class of database technologies for storing and accessing text and other unstructured data using more flexible structures than relational databases. NoSQL databases do not force the data to comply with any pre-defined schema; they allow heterogeneous data structures (Azad et al., 2020).
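The flexibility of the schema-less approach can be illustrated with a small sketch; here a document "collection" is mimicked with plain Python dictionaries rather than a real NoSQL engine, so the document fields are invented:

```python
# A schema-less "collection" mimicked with plain Python dictionaries:
# unlike rows of a relational table, documents need not share fields.
collection = [
    {"_id": 1, "title": "Field report", "body": "Sensor fault at site A."},
    {"_id": 2, "title": "Interview",
     "media": {"type": "audio", "duration_s": 1310},
     "tags": ["transcript", "raw"]},
]

# A simple query over heterogeneous documents.
matches = [doc for doc in collection if "media" in doc]
print(matches[0]["_id"])   # -> 2
```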
1.5. Big Data
Big data has attracted special interest in recent years because of its prospect of providing valuable knowledge in various fields. Big data differ from traditional data sets and require special management and specialized technologies for storage and processing. In the literature, many definitions have been given for big data, reflecting its constantly evolving and dynamic nature. The term refers to data sets of great variety, variability, complexity, and enormous size, which conventional technologies and traditional media cannot support effectively and efficiently (Cai & Zhu, 2015).
Big data are described by specific characteristics. Several studies identify five such characteristics, while others mention more. The five key characteristics of big data are (Hariri et al., 2019):
- Volume: the amount of data produced every second; it relates to the size and scale of each dataset.
- Velocity: refers to the rate at which data are produced, processed, analyzed and visualized.
- Variety: represents the structural heterogeneity of the various types of data.
- Veracity: this attribute describes the completeness, accuracy, and quality of the data.
- Value: describes the usefulness and the advantages that organizations gain by acquiring knowledge from data and turning that knowledge into actions which were previously not possible.
Multiple technologies are used in the field of big data, each effective at different stages of the data life cycle, such as data collection, pre-processing and cleaning, storage, data mining, analysis (Tsai et al., 2015), and visualization. Technological solutions such as MongoDB (MongoDB), Cassandra (Apache Cassandra), Hadoop (Apache Hadoop), Spark (Apache Spark), etc. offer powerful tools in the field of big data, but their presentation is beyond the scope of this paper.
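As a small taste of such tools, the following is a minimal PySpark sketch; it assumes a local Spark installation and a hypothetical line-delimited file events.json containing an event_type field.

```python
# A minimal PySpark sketch, assuming a local Spark installation and a
# hypothetical line-delimited JSON file "events.json".
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("big-data-sketch").getOrCreate()

events = spark.read.json("events.json")       # one JSON object per line
events.groupBy("event_type").count().show()   # aggregation runs in parallel

spark.stop()
```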
2. Knowledge Discovery from Data
2.1. Data Science
One of the biggest challenges organizations have faced in recent years is extracting knowledge and value from the large amount of data available in their information systems. Data are constantly generated by events that involve people, devices, applications, etc. The term Internet of Events encompasses all these trends and covers multiple event sources connected to the internet that generate data for processing and analysis, as shown in Figure 5. The Internet of Content refers to knowledge elements about a specific domain created by humans, such as web pages, documents, articles, e-books, YouTube videos, wikis, and blogs. The Internet of People covers all the data related to the social interaction of humans through applications such as social media, forums, and email. The Internet of Things is any data derived from physical objects connected to the network, and the Internet of Locations refers to all geographical and geospatial data (Van der Aalst, 2016).
Figure 5 – Internet of Events (IoE)
Data science is an interdisciplinary field that aims to extract knowledge from heterogeneously formatted data, structured or unstructured, big or not. The extracted knowledge is useful for building models that make predictions and support decision-making. Data science includes data collection, pre-processing, visualization, storage, and different types of learning from data, and it extracts results while considering ethical, social, legal, and labor aspects (Van der Aalst, 2016). The computing competences of data science are shown in Figure 6 (ACM Data Science Task Force, 2019).
Figure 6 – Computing Competences for Data Science
2.2. Data mining
Data mining is a process comprising a set of steps aimed at discovering interesting, unexpected, or valuable structures and patterns in large datasets; it combines methods and ideas from different scientific areas (Hand, 2007). The goal of data mining is knowledge extraction from databases and data collections, using clustering, classification, and statistical algorithms, artificial intelligence, machine learning, and automatic or semi-automatic analysis of large volumes of data, in order to discover patterns that were initially unknown.
Knowledge Discovery in Databases (KDD) is a path consisting of the following steps: selection of the appropriate data set, data pre-processing, data transformation so that the data take the format required by the chosen algorithms, data mining, pattern extraction, and finally the evaluation of these patterns, which leads to knowledge (Fayyad et al., 1996) (Figure 7).
Figure 7 – Knowledge Discovery in Databases (KDD) process
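The KDD steps just described can be sketched end to end in a few lines of Python; the example below assumes the scikit-learn library and substitutes a small synthetic data set for a real database extract.

```python
# The KDD steps on synthetic data, assuming scikit-learn is installed.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# 1. Selection: a (synthetic) target data set of 100 two-dimensional points.
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
# 2-3. Pre-processing / transformation: bring features to a common scale.
X = StandardScaler().fit_transform(data)
# 4. Data mining: cluster the records to surface hidden groupings.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# 5. Evaluation: score the discovered patterns before calling them knowledge.
print(f"silhouette score: {silhouette_score(X, labels):.2f}")
```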
2.3. Machine Learning
Machine learning is an evolving branch of computational algorithms, a combination of statistical methods and digital technologies designed to emulate analytical thinking by learning from data. Machine learning techniques have been applied successfully in diverse fields ranging from pattern recognition, computer vision, and spacecraft engineering to medicine, finance, and entertainment (El Naqa & Murphy, 2015).
Machine learning algorithms fall into two basic categories: supervised and unsupervised learning. Supervised learning analyzes past data to predict the future: it identifies the specific factors that characterize a category of objects and automatically assigns any new object to the correct category. Unsupervised learning uses algorithms to find interesting patterns in big data sets and to discover new knowledge that can be used in decision-making (Murphy, 2013). Classification and regression are characteristic approaches belonging to supervised learning, while clustering belongs to unsupervised learning.
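A compact supervised-learning illustration follows, assuming the scikit-learn library and its bundled Iris data set: the classifier learns from labelled past examples and is then scored on objects it has never seen.

```python
# Supervised learning sketch: train on labelled data, score on unseen data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=500).fit(X_train, y_train)
print(f"accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```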
2.4. Artificial Intelligence
The term artificial intelligence is not new; it was originally defined in the 1950s as a simple theory of human intelligence exhibited by machines. In today's era of rapid technological progress and exponentially growing data sets, artificial intelligence has moved from theory to application. It is used in many aspects of human activity, such as the evaluation of large data sets in near real time, autonomous driving, video recommendations based on streaming history, shopping recommendations, advertising, and fraud detection, and it often works invisibly in the background of our personal electronic devices (Helm et al., 2020). In artificial intelligence as well, it is the quality characteristics of the data that lead to efficient algorithms and to behavior with as few failures as possible.
2.5. Internet of Things
The Internet of Things (IoT) is defined as the network that allows the interconnection of different devices to collect and exchange large volumes of data using sensors, mobile devices, Radio Frequency Identification (RFID) technologies, software applications, and embedded systems (Azad et al., 2020). The Internet of Things can be used in many different aspects of life, in the private and public sectors. IoT applications can be found in healthcare, agriculture, manufacturing, consumer use, transportation, traffic monitoring, energy saving, smart homes, smart cities, pollution control, waste management, etc. The number of connected devices is predicted to keep increasing and will bring changes to our professional and personal lives.
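As a small illustration, the sketch below shows the kind of structured reading an IoT device might emit for collection and analysis; the device identifier and fields are invented, and no real IoT protocol library is involved.

```python
# A hypothetical IoT sensor reading serialized as JSON for transmission.
import json
import time

reading = {
    "device_id": "sensor-042",
    "timestamp": time.time(),      # seconds since the Unix epoch
    "temperature_c": 21.7,
    "humidity_pct": 48.2,
}

payload = json.dumps(reading)      # what the device would publish
print(payload)

# On the collection side, the payload is parsed back into a record
# ready to be stored and analyzed with the methods discussed above.
print(json.loads(payload)["temperature_c"])   # -> 21.7
```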
Conclusions
Data are the core of every digital application and are used in every form of human activity. They promote research in all sciences, contribute to the improvement of daily and professional activity, and are an integral part of all information systems. This paper attempted to present the important role of data, to identify the characteristics that quality data have, and to make clear that the existence of structure in data leads to storage and management by databases, while the absence of structure leads to other forms of organization and storage. The huge amount of data collected from multiple devices and applications, and the imperative need to analyze them, has given rise to data science, which provides data mining and machine learning methods and leads to artificial intelligence applications. In conclusion, it is particularly important that all digital technologies developed for data management be powered by data free of biases and inequalities, data that have the quality characteristics presented in this paper.
References
Abiteboul S., Quass D., McHugh J., Widom J. & Wiener J.L. (1997). “The Lorel query language for semistructured data”. International Journal on Digital Libraries, 1(1), 68–88.
Azad P., Navimipour N.J., Rahmani A.M. et al. (2020). “The role of structured and unstructured data managing mechanisms in the Internet of things”. Cluster Comput 23, 1185–1198. https://doi.org/10.1007/s10586-019-02986-2
Batini C., Cappiello C., Francalanci Ch. & Maurino A. (2009). “Methodologies for data quality assessment and improvement”. ACM Comput. Surv. 41, 3, Article 16 (July). https://doi.org/10.1145/1541880.1541883
Cai L. & Zhu Y. (2015). “The Challenges of Data Quality and Data Quality Assessment in the Big Data Era”. Data Science Journal 14: 2, 1-10. http://dx.doi.org/10.5334/dsj-2015-002
Codd E.F. (1970). “A relational model of data for large shared data banks”. Commun. ACM, 13(6), 377–387.
Dhar V. (2013). “Data science and prediction”. Communications of the ACM, Volume 56, Issue 12, December 2013, 64–73. https://doi.org/10.1145/2500499
El Naqa I., Murphy M.J. (2015). “What Is Machine Learning?”. In El Naqa I., Li R., Murphy M. (eds). Machine Learning in Radiation Oncology. Springer, Cham. https://doi.org/10.1007/978-3-319-18305-3_1
Fayyad U., Piatetsky-Shapiro G. & Smyth P. (1996). “From data mining to knowledge discovery: An overview”. In Fayyad U., Piatetsky-Shapiro G., Smyth P. & Uthurusamy R. (eds.). Advances in Knowledge Discovery and Data Mining. Cambridge (MA): MIT Press, 1–34.
Hand D.J. (2007). “Principles of Data Mining”. Drug-Safety 30, 621–622. https://doi.org/10.2165/00002018-200730070-00010
Hariri R.H., Fredericks E.M. & Bowers K.M. (2019). “Uncertainty in big data analytics: survey, opportunities, and challenges”. Journal of Big Data 6, Article 44. https://doi.org/10.1186/s40537-019-0206-3
Hassenstein M.J. & Vanella P. (2022). “Data Quality—Concepts and Problems”. Encyclopedia 2(1), 498–510. https://doi.org/10.3390/encyclopedia2010032
Helm J.M., Swiergosz A.M., Haeberle H.S. et al. (2020). “Machine Learning and Artificial Intelligence: Definitions, Applications, and Future Directions”. Curr Rev Musculoskelet Med 13, 69–76. https://doi.org/10.1007/s12178-020-09600-8
Ludäscher B., Himmeröder R., Lausen G., May W. & Schlepphorst C. (1998). “Managing semistructured data with FLORID: A deductive object-oriented perspective”. Information Systems, 23(8), 589–613.
Magnani M. & Montesi D. (2004). “A Unified Approach to Structured, Semistructured and Unstructured Data”. Technical Report UBLCS-2004-9. Department of Computer Science, University of Bologna.
Murphy K.P. (2013). Machine Learning a Probabilistic Perspective. Cambridge (MA): The MIT Press.
Silberschatz A., Korth H.F. & Sudarshan S. (1997). Database System Concepts. 3rd Edition. New York: McGraw-Hill.
Tsai C.-W., Lai C.-F., Chao H.-C. & Vasilakos A.V. (2015). “Big data analytics: a survey”. Journal of Big Data 2, Article 21.
Van der Aalst W. (2016). “Data Science in Action”. Process Mining. Berlin/Heidelberg: Springer. https://doi.org/10.1007/978-3-662-49851-4_1
Wilkinson L. (2012). “The Grammar of Graphics”. In Gentle J., Härdle W., Mori Y. (eds). Handbook of Computational Statistics. Springer Handbooks of Computational Statistics. Berlin/Heidelberg: Springer. https://doi.org/10.1007/978-3-642-21551-3_13
Yannakoudakis E.J. (1999a). Database Systems. Athens: Benou Publications.
Yannakoudakis E.J. (1999b). Database design and management. Athens: Benou Publications.
ACM Data Science Task Force (2019). Computing Competencies for Undergraduate Data Science Curricula. Draft 2, December 2019.
Apache Cassandra. https://cassandra.apache.org/_/index.html (accessed on 24 September 2023).
Apache Hadoop. https://hadoop.apache.org/ (accessed on 24 September 2023).
Apache Spark. https://spark.apache.org/ (accessed on 24 September 2023).
Dictionary.com (Online). “Data”. https://www.dictionary.com/ (accessed on 03 August 2023).
Cambridge Dictionary (Online). “Data”. https://dictionary.cambridge.org/dictionary/english/data (accessed on 03 August 2023).
MongoDB. https://www.mongodb.com/ (accessed on 24 September 2023).