Industry 4.0 Interoperability, Analytics, Security, and Case Studies

G. Rajesh, X. Mercilin Raajini, Hien Dang

eBook - ePub, 248 pages, English
About This Book

All over the world, extensive research is in progress in the domain of Industry 4.0 and related techniques. Industry 4.0 is expected to have a very high impact on labor markets, global value chains, education, health, the environment, and many socio-economic aspects.

Industry 4.0 Interoperability, Analytics, Security, and Case Studies provides a deeper understanding of the drivers and enablers of Industry 4.0. It includes real case studies of various applications related to different fields, such as cyber physical systems (CPS), Internet of Things (IoT), cloud computing, machine learning, virtualization, decentralization, blockchain, fog computing, and many other related areas. Also discussed are interoperability, design, and implementation challenges.

Researchers, academicians, and those working in industry around the globe will find this book of interest.

FEATURES



  • Provides an understanding of the drivers and enablers of Industry 4.0


  • Includes real case studies of various applications for different fields


  • Discusses technologies such as cyber physical systems (CPS), Internet of Things (IoT), cloud computing, machine learning, virtualization, decentralization, blockchain, fog computing, and many other related areas


  • Covers design, implementation challenges, and interoperability


  • Offers detailed knowledge on Industry 4.0 and its underlying technologies, research challenges, solutions, and case studies


Information

Publisher
CRC Press
Year
2021
ISBN
9781000338041
Edition
1

1 Big Data Analytics and Machine Learning for Industry 4.0: An Overview

Nguyen Tuan Thanh Le and Manh Linh Pham
CONTENTS
1.1 Big Data Analytics for Industry 4.0
1.1.1 Characteristics of Big Data
1.1.2 Characteristics of Big Data Analytics
1.2 Machine Learning for Industry 4.0
1.2.1 Supervised Learning
1.2.2 Unsupervised Learning
1.2.3 Semi-Supervised Learning
1.2.4 Reinforcement Learning
1.2.5 Machine Learning for Big Data
1.3 Deep Learning for Industry 4.0: State of the Art
1.4 Conclusion
Acknowledgments
References

1.1 Big Data Analytics for Industry 4.0

1.1.1 Characteristics of Big Data

The concept of “big data” was mentioned for the first time by Roger Mougalas in 2005 [1]. It refers to large-scale data, one of the characteristics of Industry 4.0, that cannot be stored on a single computer and is almost impossible to handle using traditional data analytics approaches. The explosion of big data applications after 2011 is related to the improvement in computing power and storage, the reduction in the cost of sensors and communication, and, more recently, the development of the Internet of Things (IoT). These advances have led to the use of multiple sources (sensors, applications, people, and animals) in the generation of data. In 2011, big data was defined in [2] by four characteristics (the 4Vs): Volume, Velocity, Variety, and Value. A fifth, Veracity, was introduced in 2012 [3], as shown in Fig. 1.1.
FIGURE 1.1 5Vs Characteristics of Big Data
Volume refers to the size and/or scale of datasets. To date, there is no universal threshold above which a data volume counts as big data, because the threshold shifts over time and varies across datasets. Generally, big data starts at volumes on the order of exabytes (EB) or zettabytes (ZB) [4].
Variety implies the diversity of data forms, which may be structured, semi-structured, or unstructured. Real-world datasets, coming from heterogeneous sources, are mostly unstructured or semi-structured, which makes analysis challenging because of inconsistency, incompleteness, and noise. Therefore, data preprocessing is needed to remove noise; it includes steps such as data cleaning, data integration, and data transformation [5].
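The three preprocessing steps named above can be sketched in plain Python. This is a minimal illustration, not the chapter's method: the record fields (`machine_id`, `temp`, `status`) and the min-max transformation are hypothetical choices for the example.

```python
# Minimal sketch of data cleaning, integration, and transformation.
# Record fields and the noise rule (missing/negative readings) are
# illustrative assumptions.

def clean(records):
    """Cleaning: drop records with missing or noisy (negative) readings."""
    return [r for r in records if r.get("temp") is not None and r["temp"] >= 0]

def integrate(sensor_records, log_records):
    """Integration: join two heterogeneous sources on a shared machine id."""
    logs = {r["machine_id"]: r for r in log_records}
    return [{**r, **logs.get(r["machine_id"], {})} for r in sensor_records]

def transform(records):
    """Transformation: min-max normalize the temperature readings to [0, 1]."""
    temps = [r["temp"] for r in records]
    lo, hi = min(temps), max(temps)
    span = (hi - lo) or 1  # avoid division by zero for constant data
    return [{**r, "temp": (r["temp"] - lo) / span} for r in records]

raw = [{"machine_id": 1, "temp": 40}, {"machine_id": 2, "temp": None},
       {"machine_id": 3, "temp": 80}]
logs = [{"machine_id": 1, "status": "ok"}, {"machine_id": 3, "status": "fault"}]
prepared = transform(integrate(clean(raw), logs))
```

After the pipeline runs, the record with a missing reading has been dropped, the status logs are merged in, and the remaining temperatures are scaled to [0, 1].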
Velocity indicates the speed of processing data. It can fall into three categories: streaming processing, real-time processing, or batch processing. This characteristic emphasizes that the speed of processing data should keep up with the speed at which data is produced [4].
Value alludes to the usefulness of data for decision making. Giant companies (e.g., Amazon, Google, Facebook) analyze large-scale datasets of users and their behavior daily to give recommendations, improve location services, or provide targeted advertising [3].
Veracity denotes the quality and trustworthiness of datasets. Because of the variety of data, accuracy and trust become harder to achieve, yet they play an essential role in applications of big data analytics (BDA). When analyzing millions of health care records in order to respond to an outbreak that impacts a huge number of people (e.g., the COVID-19 pandemic), or veterinary records to predict disease in swine herds (e.g., African swine fever or porcine reproductive and respiratory syndrome), any ambiguities or inconsistencies in the datasets can impede the precision of the analytic process [3], leading to a catastrophic situation.
Generally, big data in the context of Industry 4.0 can originate from many and varied sources: product or machine design data, machine-operation data from control systems, manual-operation records kept by staff, product-quality and process-quality data, manufacturing execution systems, system-monitoring and fault-detection deployments, information on operational costs and manufacturing, logistics information from partners, information from customers on product utilization, feedback, and so on [6]. Some of these datasets are semi-structured (e.g., manual-operation records), some are structured (e.g., sensor signals), and others are completely unstructured (e.g., images). Therefore, an Industry 4.0 enterprise requires cutting-edge technologies that can take full advantage of this valuable manufacturing data, including machine learning (ML) and BDA.

1.1.2 Characteristics of Big Data Analytics

BDA can be defined as “the process of analyzing large scale datasets in order to find unknown correlations, hidden patterns, and other valuable information which is not able to be analysed using conventional data analytics” [7]; conventional data analysis techniques are no longer effective because of the special characteristics of big data: massive, heterogeneous, high dimensional, complex, erroneous, unstructured, noisy, and incomplete [8].
BDA has attracted attention from both academic and industrial scientists as the need to discover hidden trends in large-scale datasets increases. Reference [9] compared the impact of BDA on Industry 4.0 with the invention of the microscope and telescope for biology and astronomy, respectively. Recently, the considerable development of the ubiquitous IoT, sensor networks, and cyber-physical systems (CPS) has expanded the data-collection process to an enormous scale in numerous domains, including social media, smart cities, education, health care, finance, agriculture, etc. [3].
Various advanced data-analysis techniques (e.g., ML, computational intelligence, data mining, natural language processing) and strategies (e.g., parallelization, divide and conquer, granular computing, incremental learning, instance selection, feature selection, and sampling) can help handle big data issues. These techniques and strategies also enable more efficient processing and better decision making [3].
Divide and conquer helps to reduce the complexity of computing problems. It is composed of three phases: first, the large, complex problem is divided into several smaller, easier ones; second, each smaller problem is solved; and finally, the solutions of all the smaller problems are combined to solve the original problem [3].
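The three phases above can be sketched with a toy task; summing a list stands in for a large analytic problem here, purely for illustration.

```python
# Divide-and-conquer sketch: divide the problem, solve the halves,
# combine the partial results.

def dc_sum(data):
    # Base case: the problem is small enough to solve directly.
    if len(data) <= 2:
        return sum(data)
    mid = len(data) // 2
    # Divide into two smaller problems, solve each recursively,
    # then combine their solutions.
    return dc_sum(data[:mid]) + dc_sum(data[mid:])

print(dc_sum(list(range(101))))  # 5050
```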
Parallelization improves computation time by dividing big problems into smaller instances, distributing the smaller tasks across multiple threads, and performing them simultaneously. This strategy decreases computation time rather than the total amount of work, because multiple tasks are performed simultaneously rather than sequentially [10].
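A minimal sketch of this strategy, assuming Python's standard `concurrent.futures` module: the problem is split into chunks, each chunk is handled by a separate worker, and the partial results are combined. A thread pool is used here for portability; CPU-bound workloads would typically use a process pool instead.

```python
# Parallelization sketch: split, distribute across workers, combine.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker solves one smaller instance of the problem.
    return sum(chunk)

data = list(range(1_000_001))
# Divide the big problem into smaller instances (chunks).
chunks = [data[i:i + 250_000] for i in range(0, len(data), 250_000)]

# Distribute the chunks across workers and run them concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(partial_sum, chunks))

total = sum(partials)  # combine step
print(total)  # 500000500000
```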
Incremental learning is widely used to handle streaming data. An incremental learning algorithm can be trained continuously with additional data rather than only the data currently at hand. During learning, this strategy tunes its parameters each time new input data arrives [10].
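As a toy illustration of this parameter-tuning idea (not the chapter's algorithm), a running mean can serve as a one-parameter "model" that updates each time a new data point streams in, without revisiting earlier data.

```python
# Incremental-learning sketch: the model's single parameter (the mean)
# is tuned on each new input; past data never needs to be re-read.

class RunningMean:
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        # Tune the parameter incrementally with each new data point.
        self.n += 1
        self.mean += (x - self.mean) / self.n

model = RunningMean()
for x in [10, 20, 30, 40]:  # stand-in for a data stream
    model.update(x)
print(model.mean)  # 25.0
```

The same update pattern generalizes to real incremental learners (e.g., models trained batch-by-batch on a stream), where the "parameter" is a weight vector rather than a scalar.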
Granular compu...
