Industry 4.0 Interoperability, Analytics, Security, and Case Studies

G. Rajesh, X. Mercilin Raajini, Hien Dang


About This Book

Extensive research is underway worldwide on Industry 4.0 and its related technologies. Industry 4.0 is expected to have a profound impact on labor markets, global value chains, education, health, the environment, and many socio-economic aspects.

Industry 4.0 Interoperability, Analytics, Security, and Case Studies provides a deeper understanding of the drivers and enablers of Industry 4.0. It includes real case studies of applications in different fields, such as cyber-physical systems (CPS), Internet of Things (IoT), cloud computing, machine learning, virtualization, decentralization, blockchain, fog computing, and many other related areas. Interoperability, design, and implementation challenges are also discussed.

Researchers, academicians, and those working in industry around the globe will find this book of interest.

FEATURES

  • Provides an understanding of the drivers and enablers of Industry 4.0
  • Includes real case studies of various applications for different fields
  • Discusses technologies such as cyber-physical systems (CPS), Internet of Things (IoT), cloud computing, machine learning, virtualization, decentralization, blockchain, fog computing, and many other related areas
  • Covers design and implementation challenges, and interoperability
  • Offers detailed knowledge of Industry 4.0 and its underlying technologies, research challenges, solutions, and case studies


Information

Publisher
CRC Press
Year
2021
ISBN
9781000338041
Edition
1
Pages
248

1 Big Data Analytics and Machine Learning for Industry 4.0: An Overview

Nguyen Tuan Thanh Le and Manh Linh Pham
CONTENTS
1.1 Big Data Analytics for Industry 4.0
1.1.1 Characteristics of Big Data
1.1.2 Characteristics of Big Data Analytics
1.2 Machine Learning for Industry 4.0
1.2.1 Supervised Learning
1.2.2 Unsupervised Learning
1.2.3 Semi-Supervised Learning
1.2.4 Reinforcement Learning
1.2.5 Machine Learning for Big Data
1.3 Deep Learning for Industry 4.0: State of the Art
1.4 Conclusion
Acknowledgments
References

1.1 Big Data Analytics for Industry 4.0

1.1.1 Characteristics of Big Data

The concept of “big data” was first mentioned by Roger Mougalas in 2005 [1]. It refers to large-scale data, one of the characteristics of Industry 4.0, that cannot be stored on a single computer and is almost impossible to handle using traditional data-analytics approaches. The explosion of big data applications after 2011 is tied to improvements in computing power and storage, the falling cost of sensors and communication, and, more recently, the development of the Internet of Things (IoT). These advances have led to data being generated from multiple sources (sensors, applications, people, and animals). In 2011, big data was defined by [2] using four characteristics, the 4Vs: Volume, Velocity, Variety, and Value. A fifth, Veracity, was introduced in 2012 [3], as shown in Fig. 1.1.
FIGURE 1.1 5Vs Characteristics of Big Data
Volume refers to the size and/or scale of datasets. To date there is no universal threshold above which a data volume counts as big data, because the threshold shifts over time and varies across datasets. Generally, big data has a volume starting at the exabyte (EB) or zettabyte (ZB) scale [4].
Variety implies the diversity of data forms: structured, semi-structured, or unstructured. Real-world datasets, coming from heterogeneous sources, are mostly unstructured or semi-structured, which makes analysis challenging because of inconsistency, incompleteness, and noise. Therefore, data preprocessing is needed to remove noise; it includes steps such as data cleaning, data integration, and data transformation [5].
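The cleaning, integration, and transformation steps above can be sketched in a few lines of plain Python. The record layout, field names, and unit conversion here are purely illustrative, not taken from the chapter:

```python
# Hypothetical preprocessing pass over semi-structured sensor records:
# cleaning (drop incomplete rows), integration (merge a second source into
# a common schema), and transformation (normalize units).

source_a = [
    {"sensor": "s1", "temp_c": 21.5},
    {"sensor": "s2", "temp_c": None},   # incomplete record -> cleaned out
]
source_b = [
    {"sensor": "s3", "temp_f": 70.7},   # different unit -> transformed
]

# Cleaning: remove records with missing measurements.
cleaned = [r for r in source_a if r["temp_c"] is not None]

# Integration + transformation: merge the second source, converting
# Fahrenheit to Celsius so all records share one schema.
for r in source_b:
    cleaned.append({"sensor": r["sensor"],
                    "temp_c": round((r["temp_f"] - 32) * 5 / 9, 1)})

print(cleaned)  # a unified, structured list ready for analysis
```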
Velocity indicates the speed at which data is processed, which can fall into three categories: streaming, real-time, or batch processing. This characteristic emphasizes that the speed of processing data should keep up with the speed at which data is produced [4].
Value alludes to the usefulness of data for decision making. Giant companies (e.g., Amazon, Google, Facebook) analyze large-scale datasets of users and their behavior daily to give recommendations, improve location services, or provide targeted advertising [3].
Veracity denotes the quality and trustworthiness of datasets. Because of the variety of data, accuracy and trust become harder to achieve, yet they play an essential role in applications of big data analytics (BDA). When analyzing millions of health-care records to respond to an outbreak affecting huge numbers of people (e.g., the COVID-19 pandemic), or veterinary records to predict disease in swine herds (e.g., African swine fever or porcine reproductive and respiratory syndrome), any ambiguities or inconsistencies in the datasets can impair the precision of the analysis [3], leading to a catastrophic situation.
Generally, big data in the context of Industry 4.0 can originate from many and varied sources, such as product or machine design data, machine-operation data from control systems, manual-operation records kept by staff, product-quality and process-quality data, manufacturing execution systems, system-monitoring and fault-detection deployments, information on operational and manufacturing costs, logistics information from partners, and information from customers on product utilization and feedback [6]. Some of these datasets are semi-structured (e.g., manual-operation records), some are structured (e.g., sensor signals), and others are completely unstructured (e.g., images). Therefore, an Industry 4.0 enterprise requires cutting-edge technologies that can take full advantage of this valuable manufacturing data, including machine learning (ML) and BDA.

1.1.2 Characteristics of Big Data Analytics

BDA can be described as “the process of analyzing large scale datasets in order to find unknown correlations, hidden patterns, and other valuable information which is not able to be analysed using conventional data analytics” [7]; conventional data-analysis techniques are no longer effective given the special characteristics of big data: massive, heterogeneous, high-dimensional, complex, erroneous, unstructured, noisy, and incomplete [8].
BDA has attracted attention from both academic and industrial scientists as the need to discover hidden trends in large-scale datasets grows. Reference [9] compared the impact of BDA on Industry 4.0 to the invention of the microscope and the telescope for biology and astronomy, respectively. Recently, considerable development in the ubiquitous Internet of Things (IoT), sensor networks, and cyber-physical systems (CPS) has expanded data collection to an enormous scale in numerous domains, including social media, smart cities, education, health care, finance, and agriculture [3].
Various advanced data-analysis techniques (e.g., ML, computational intelligence, data mining, natural language processing) and strategies (e.g., parallelization, divide and conquer, granular computing, incremental learning, instance selection, feature selection, and sampling) can help handle big data issues. Using these techniques and strategies also enables more efficient processing and better decision making [3].
Divide and conquer helps reduce the complexity of computational problems. It is composed of three phases: first, it breaks the large, complex problem into several smaller, easier ones; second, it solves each smaller problem; and finally, it combines the solutions of the smaller problems to solve the original problem [3].
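The three phases can be sketched with a deliberately simple task, summing a list by recursive halving. The function name and example data are illustrative only:

```python
# Divide-and-conquer sketch: sum a list by splitting it in half,
# solving each half recursively, then combining the partial results.

def dc_sum(data):
    if len(data) <= 1:            # base case: trivially small subproblem
        return data[0] if data else 0
    mid = len(data) // 2
    left = dc_sum(data[:mid])     # phase 1 + 2: split, solve each part
    right = dc_sum(data[mid:])
    return left + right           # phase 3: combine partial solutions

print(dc_sum(list(range(1, 101))))  # 5050
```

The same split/solve/combine shape underlies practical big data frameworks such as MapReduce, where the "combine" phase is the reduce step.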
Parallelization improves computation time by dividing a big problem into smaller instances, distributing the smaller tasks across multiple threads, and performing them simultaneously. This strategy decreases computation time rather than the total amount of work, because tasks run simultaneously instead of sequentially [10].
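As a minimal sketch of this strategy, the standard-library `concurrent.futures` module can fan chunks of a dataset out to a worker pool. A thread pool is used here only to keep the example self-contained; for CPU-bound Python work a process pool is the usual choice because of the global interpreter lock:

```python
# Parallelization sketch: split one large task into chunks, run the chunks
# concurrently on a worker pool, then combine the partial results.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    return sum(chunk)

data = list(range(100_000))
# Divide the work into four equally sized chunks.
chunks = [data[i:i + 25_000] for i in range(0, len(data), 25_000)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(chunk_sum, chunks))  # chunks run concurrently

total = sum(partials)  # combine: same answer as sum(data), less wall time
print(total)
```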
Incremental learning is widely used to handle streaming data. An incremental learning algorithm can be trained continuously with additional data rather than only the data currently available: during learning, it tunes its parameters each time new input data arrives [10].
Granular compu...
