Deep Learning and Parallel Computing Environment for Bioengineering Systems
Edited by Arun Kumar Sangaiah

About this book

Deep Learning and Parallel Computing Environment for Bioengineering Systems provides a forum for the technical advancement of deep learning in parallel computing environments across diverse bioengineering domains and their applications. Pursuing an interdisciplinary approach, it focuses on methods used to identify and acquire valid, potentially useful knowledge sources. The book's major strength is managing the gathered knowledge and applying it to multiple domains, including health care, social networks, mining, recommendation systems, image processing, pattern recognition and prediction, using deep learning paradigms. It integrates the core ideas of deep learning and their applications in bioengineering domains in a form accessible to scholars and academicians. The techniques and concepts proposed here can be extended in the future to accommodate changing business organizations' needs as well as practitioners' innovative ideas.

  • Presents novel, in-depth research contributions from a methodological/application perspective on the fusion of deep machine learning paradigms and their capabilities in solving a diverse range of problems
  • Illustrates the state of the art and recent developments in theories and applications of deep learning approaches applied to parallel computing environments in bioengineering systems
  • Provides concepts and technologies that are successfully used in the implementation of today's intelligent data-centric critical systems and multimedia cloud and big data systems

Chapter 1

Parallel Computing, Graphics Processing Unit (GPU) and New Hardware for Deep Learning in Computational Intelligence Research

M. Madiajagan, MS, PhD; S. Sridhar Raj, BTech, MTech

Abstract

A graphics processing unit (GPU) is an electronic circuit that manipulates and modifies memory to accelerate the production of images for output. Deep learning involves huge numbers of matrix multiplications and other operations that can be massively parallelized, and thus sped up, on GPUs. A single GPU may have thousands of cores, while a CPU usually has no more than a dozen. The practical applicability of GPUs is affected by two issues, long training time and limited GPU memory, both of which worsen as neural network size grows. To address these issues, this chapter presents technologies in distributed parallel processing that improve training time and optimize memory use, and explores hardware engine architectures for data size reduction. The GPUs generally used for deep learning have less memory than is available to CPUs; even the latest Tesla GPU offers only 16 GB. Because GPU memory cannot easily be increased, networks must be designed to fit within the available memory. This can limit progress, and overcoming it would be highly beneficial to the computational intelligence area.

Keywords

Deep learning; Parallelization; Graphics processing unit; Hardware architecture; Memory optimization; Computational intelligence

1.1 Introduction

Machine learning, a part of artificial intelligence (AI), is the ability of software to perform a single task or a series of tasks intelligently without being explicitly programmed for those activities. Normally, software behaves according to the programmer's coded instructions; machine learning goes one step further by making the software capable of accomplishing intended tasks using statistical analysis and predictive analytics techniques. In simple words, machine learning helps the software learn by itself and act accordingly.
Consider an example: when we like or comment on a friend's picture or video on a social media site, related images and videos posted earlier are surfaced and stay displayed. The same happens with the "people you may know" suggestions on Facebook, where the system suggests other users' profiles, somehow related to our existing friends list, to add as friends. How does the system know that? This is machine learning at work. The software uses statistical analysis to identify the patterns in your behavior as a user and, using predictive analytics, populates the related news feed on your social media site.

1.1.1 Machine and Deep Learning

Machine learning algorithms are used to automatically understand and solve the day-to-day problems that people face. The number of hidden layers in an artificial neural network reflects the type of learning. The intent is to gain knowledge by learning from datasets using customized methods. But in the case of big data, where the data is huge and complicated, it is difficult to learn and analyze [1].
Deep learning plays a vital role in resolving this issue of learning from and analyzing big data. It learns complex data structures and representations acquired from raw datasets to derive meaningful information. Big data also suits the nature of deep learning algorithms, which require large amounts of training data; training the many parameters of deep learning networks increases testing accuracy [2].
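To make the notions of hidden layers and trainable parameters concrete, the following minimal sketch (assuming the PyTorch library is installed; the layer sizes are illustrative only and not taken from the chapter) builds a small fully connected network and counts its parameters. Real deep learning models scale this idea to millions of parameters, which is where large training sets and GPUs become essential.

import torch.nn as nn

# A small fully connected network with three hidden layers.
# Layer widths here are arbitrary illustrative values.
model = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),   # input layer -> hidden layer 1
    nn.Linear(512, 256), nn.ReLU(),   # hidden layer 2
    nn.Linear(256, 128), nn.ReLU(),   # hidden layer 3
    nn.Linear(128, 10),               # output layer (e.g., 10 classes)
)

# Count trainable parameters: each Linear layer contributes weights and biases.
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Hidden layers: 3, trainable parameters: {num_params}")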
Some deep learning applications are in natural language processing, video processing, recommendation systems, disease prediction, drug discovery, speech recognition, web content filtering, etc. As the scope of learning algorithms evolves, the applications of deep learning grow drastically.

1.1.2 Graphics Processing Unit (GPU)

A graphics processing unit (GPU) is a specialized circuit that performs high-level manipulation of memory and renders 2D and 3D graphics to produce the final display. In the beginning, the need for GPUs was driven by the world of computer games; gradually, researchers realized that they have many other applications, such as robot motion planning, image processing and video processing. The original task of GPUs was simply to express algorithms in terms of pixels, graphical primitives and vectors. NVIDIA and AMD, two giants in GPU manufacturing, changed the perspective on GPUs by introducing dedicated multi-core pipelines for rendering graphics. A CPU uses vector registers to execute an instruction stream, whereas a GPU uses hardware threads that execute a single instruction on different data [1].
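As a quick way to see what GPU hardware a deep learning framework can use, the following sketch (assuming PyTorch built with CUDA support; it is not part of the chapter's own tooling) lists each visible GPU with its multiprocessor count and memory, which also makes the memory limits mentioned in the abstract visible in practice.

import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        # total_memory is reported in bytes; convert to gigabytes.
        mem_gb = props.total_memory / 1024**3
        print(f"GPU {i}: {props.name}, "
              f"{props.multi_processor_count} streaming multiprocessors, "
              f"{mem_gb:.1f} GB memory")
else:
    print("No CUDA-capable GPU detected; computations will run on the CPU.")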
GPUs, and now TPUs (tensor processing units), reduce the time required to train a machine learning (ML) model. For example, a model that takes a week to train on a CPU might take a day on a GPU and only a few hours on a TPU. Multiple GPUs and TPUs can also be used together. Multiple CPUs can be combined as well, but network latency and other factors make that approach untenable. GPUs are designed to handle the large matrices that feature in many ML models, while TPUs are designed specifically for ML workloads and do not include the circuitry required for image display.
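The speedup comes largely from parallelizing dense linear algebra. The sketch below (a rough illustration, assuming PyTorch and a CUDA-capable GPU are available; the matrix size and the measured ratio will vary with hardware) times the same large matrix multiplication on the CPU and on the GPU.

import time
import torch

n = 4096  # illustrative matrix size
a_cpu = torch.randn(n, n)
b_cpu = torch.randn(n, n)

# Time the multiplication on the CPU.
start = time.perf_counter()
c_cpu = a_cpu @ b_cpu
cpu_time = time.perf_counter() - start

if torch.cuda.is_available():
    a_gpu = a_cpu.to("cuda")
    b_gpu = b_cpu.to("cuda")
    torch.cuda.synchronize()          # make sure the transfers have finished
    start = time.perf_counter()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()          # wait for the GPU kernel to complete
    gpu_time = time.perf_counter() - start
    print(f"CPU: {cpu_time:.3f} s, GPU: {gpu_time:.3f} s, "
          f"speedup: {cpu_time / gpu_time:.1f}x")
else:
    print(f"CPU: {cpu_time:.3f} s (no GPU available for comparison)")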

1.1.3 Computational Intelligence

Computational intelligence deals with systems that automatically adapt and organize themselves with respect to their implementation environment. By possessing attributes such as knowledge discovery, data abstraction, association and generalization, such a system can learn and deal with new situations in changing environments. Silicon-based computational intelligence comprises hybrids of paradigms such as artificial neural networks, fuzzy systems and evolutionary algorithms, augmented with knowledge elements, and is often designed to mimic one or more aspects of carbon-based biological intelligence [3].

1.1.4 GPU, Deep Learning and Computational Intelligence

GPUs are inherently parallel processors, which helps to improve the execution time of deep learning algorithms. By applying parallel deep learning on GPUs, computational intelligence research applications that involve images, videos, etc., can be trained at a very fast rate and the enti...
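One standard way to exploit this parallelism across several GPUs is data parallelism, where each GPU processes a slice of every input batch. The sketch below is an illustration only (assuming PyTorch; the tiny model and the random batch are hypothetical stand-ins, not code from the chapter) showing a single training step with torch.nn.DataParallel.

import torch
import torch.nn as nn

# Hypothetical small classifier, used only to illustrate multi-GPU data parallelism.
model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 10))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if torch.cuda.device_count() > 1:
    # DataParallel splits each input batch across the visible GPUs,
    # runs the forward pass on each replica, and gathers the outputs.
    model = nn.DataParallel(model)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step on a random batch (stand-in for real data).
inputs = torch.randn(64, 1024, device=device)
targets = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"Training step done on {torch.cuda.device_count() or 1} device(s), loss={loss.item():.4f}")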

Table of contents

  1. Cover image
  2. Title page
  3. Table of Contents
  4. Copyright
  5. Preface
  6. Foreword
  7. Acknowledgment
  8. Chapter 1: Parallel Computing, Graphics Processing Unit (GPU) and New Hardware for Deep Learning in Computational Intelligence Research
  9. Chapter 2: Big Data Analytics and Deep Learning in Bioinformatics With Hadoop
  10. Chapter 3: Image Fusion Through Deep Convolutional Neural Network
  11. Chapter 4: Medical Imaging With Intelligent Systems: A Review
  12. Chapter 5: Medical Image Analysis With Deep Neural Networks
  13. Chapter 6: Deep Convolutional Neural Network for Image Classification on CUDA Platform
  14. Chapter 7: Efficient Deep Learning Approaches for Health Informatics
  15. Chapter 8: Deep Learning and Semi-Supervised and Transfer Learning Algorithms for Medical Imaging
  16. Chapter 9: Survey on Evaluating the Performance of Machine Learning Algorithms: Past Contributions and Future Roadmap
  17. Chapter 10: Miracle of Deep Learning Using IoT
  18. Chapter 11: Challenges in Storing and Processing Big Data Using Hadoop and Spark
  19. Chapter 12: An Efficient Biogeography-Based Optimization Algorithm to Solve the Location Routing Problem With Intermediate Depots for Multiple Perishable Products
  20. Chapter 13: Evolutionary Mapping Techniques for Systolic Computing System
  21. Chapter 14: Varied Expression Analysis of Children With ASD Using Multimodal Deep Learning Technique
  22. Chapter 15: Parallel Machine Learning and Deep Learning Approaches for Bioinformatics
  23. Index