Hands-On Neural Networks with Keras

Design and create neural networks using deep learning and artificial intelligence principles

Niloy Purkait

  • 462 pages
  • English
  • ePUB (mobile friendly)

About This Book

Your one-stop guide to learning and implementing artificial neural networks with Keras effectively

Key Features

  • Design and create neural network architectures on different domains using Keras
  • Integrate neural network models in your applications using this highly practical guide
  • Get ready for the future of neural networks through transfer learning and predictive multi-network models

Book Description

Neural networks are used to solve a wide range of problems in different areas of AI and deep learning.

Hands-On Neural Networks with Keras starts by teaching you the core concepts of neural networks. You will delve into combining different neural network models and work with real-world use cases, including computer vision, natural language understanding, synthetic data generation, and more. Moving on, you will become well versed with convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, autoencoders, and generative adversarial networks (GANs), using real-world training datasets. We will examine how to use CNNs for image recognition, how to work with reinforcement learning agents, and much more. We will dive into the specific architectures of various networks and then implement each of them in a hands-on manner using industry-grade frameworks.

By the end of this book, you will be highly familiar with all prominent deep learning models and frameworks, and the options you have when applying deep learning to real-world scenarios and embedding artificial intelligence as the core fabric of your organization.

What you will learn

  • Understand the fundamental nature and workflow of predictive data modeling
  • Explore how different types of visual and linguistic signals are processed by neural networks
  • Dive into the mathematical and statistical ideas behind how networks learn from data
  • Design and implement various neural networks such as CNNs, LSTMs, and GANs
  • Use different architectures to tackle cognitive tasks and embed intelligence in systems
  • Learn how to generate synthetic data and use augmentation strategies to improve your models
  • Stay on top of the latest academic and commercial developments in the field of AI

Who this book is for

This book is for machine learning practitioners, deep learning researchers, and AI enthusiasts who are looking to get well versed with different neural network architectures using Keras. Working knowledge of the Python programming language is mandatory.

Information

Year: 2019
ISBN: 9781789533347

Section 1: Fundamentals of Neural Networks

This section familiarizes the reader with the basics of operating neural networks: how to select appropriate data, normalize features, and execute a data processing pipeline from scratch. Readers will learn how to pair appropriate hyperparameters with suitable activation functions, loss functions, and optimizers. Once completed, readers will have experienced working with real-world data to architect and test deep learning models on the most prominent frameworks.
This section comprises the following chapters:
  • Chapter 1, Overview of Neural Networks
  • Chapter 2, Deeper Dive into Neural Networks
  • Chapter 3, Signal Processing – Data Analysis with Neural Networks

Overview of Neural Networks

Greetings to you, fellow sentient being; welcome to our exciting journey. The journey itself is to understand the concepts and inner workings behind an elusively powerful computing paradigm: the artificial neural network (ANN). While this notion has been around for more than half a century, the ideas accredited to its birth (such as what an agent is, or how an agent may learn from its surroundings) date back to Aristotelian times, and perhaps even to the dawn of civilization itself. Unfortunately, people in the time of Aristotle were not blessed with the ubiquity of big data, or the speeds of Graphics Processing Unit (GPU)-accelerated and massively parallelized computing, which today open up some very promising avenues for us. We now live in an era where the majority of our species has access to the building blocks and tools required to assemble artificially intelligent systems. While covering the entire developmental timeline that brought us here today is beyond the scope of this book, we will attempt to briefly summarize some pivotal concepts and ideas that will help us think intuitively about our problem.
In this chapter, we will cover the following topics:
  • Defining our goal
  • Knowing our tools
  • Understanding neural networks
  • Observing the brain
  • Information modeling and functional representations
  • Some fundamental refreshers in data science

Defining our goal

Essentially, our task here is to conceive a mechanism that is capable of dealing with any data that it is introduced to. In doing so, we want this mechanism to detect any underlying patterns present in our data, in order to leverage it for our own benefit. Succeeding at this task means that we will be able to translate any form of raw data into knowledge, in the form of actionable business insights, burden-alleviating services, or life-saving medicines. Hence, what we actually want is to construct a mechanism that is capable of universally approximating any possible function that could represent our data; the elixir of knowledge, if you will. Do step back and imagine such a world for a moment; a world where the deadliest diseases may be cured in minutes. A world where all are fed, and all may choose to pursue the pinnacle of human achievement in any discipline without fear of persecution, harassment, or poverty. Too much of a promise? Perhaps. Achieving this utopia will take a bit more than designing efficient computer systems. It will require us to evolve our moral perspective in parallel, reconsider our place on this planet as individuals, as a species, and as a whole. But you will be surprised by how much computers can help us get there.
It's important here to understand that it is not just any kind of computer system that we are talking about. This is something very different from what our computing forefathers, such as Babbage and Turing, dealt with. This is not a simple Turing machine or difference engine (although many, if not all, of the concepts we will review in our journey relate directly back to those enlightened minds and their inventions). Hence, our goal will be to cover the pivotal academic contributions, practical experimentation, and implementation insights that followed from decades, if not centuries, of scientific research behind the fundamental concept of generating intelligence; a concept that is arguably most innate to us humans, yet so scarcely understood.

Knowing our tools

We will mainly be working with two of the most popular deep learning frameworks in existence, both of which are freely available to the public at large. This does not mean that we will completely limit our implementations and exercises to these two platforms; we may well experiment with other prominent deep learning frameworks and backends. We will, however, try to use either TensorFlow or Keras, due to their widespread popularity, large support communities, and flexibility in interfacing with other prominent backend and frontend frameworks (such as Theano, Caffe, or Node.js, respectively). We will now provide a little background information on Keras and TensorFlow:

Keras

Many have named Keras the lingua franca of deep learning, due to its user-friendliness, modularity, and extensibility. Keras is a high-level application programming interface (API) for neural networks, and focuses on enabling fast experimentation. It is written in Python and is capable of running on top of backends such as TensorFlow, Theano, or CNTK. Keras was initially developed as part of the research effort of the ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System) project. Its name is a reference to the Greek word κέρας (keras), which literally translates to horn. The word alludes to a play on words dating back to ancient Greek literature, referring to the horn of Amalthea (also known as the Cornucopia), an eternal symbol of abundance.
Some functionalities of Keras include the following:
  • Easy and fast prototyping
  • Supports implementation of several of the latest neural network architectures, as well as pretrained models and exercise datasets
  • Runs seamlessly on CPUs and GPUs
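
To make these points concrete, here is a minimal sketch of fast prototyping in Keras: a tiny fully connected classifier is defined, compiled with an optimizer and loss function, and summarized in a handful of lines. The layer sizes, optimizer, and input shape are illustrative choices for this sketch, not prescriptions from this book.

```python
# A minimal Keras sketch: define, compile, and inspect a tiny fully connected
# classifier. Layer sizes, optimizer, and input shape are illustrative choices.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(64, activation='relu', input_shape=(784,)),  # hidden layer over flattened 28x28 inputs
    Dense(10, activation='softmax')                     # output probabilities for 10 classes
])

# Pair the architecture with an optimizer, a loss function, and a metric.
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.summary()  # prints the layer-by-layer structure and parameter counts
```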

TensorFlow

TensorFlow is an open source software library for high-performance numerical computation using a data representation known as tensors. It allows people like you and me to implement something called dataflow graphs. A dataflow graph is essentially a structure that describes how data moves through a network, or a series of processing nodes. Every node in the graph represents a mathematical operation, and each connection (or edge) between nodes carries a multidimensional data array, or tensor. In this manner, TensorFlow provides a flexible API that allows easy deployment of computation across a variety of platforms (such as CPUs, GPUs, and Google's own Tensor Processing Units (TPUs)), from desktops to clusters of servers, and to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team, it provides an excellent programmatic interface that supports neural network design and deep learning.
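
As a brief illustration of this dataflow idea, the following sketch pushes a small tensor through two operations. It assumes TensorFlow 2.x (under the 1.x API the same graph would be built and run inside a session), and the specific tensors and operations are arbitrary choices made purely for illustration.

```python
# A minimal TensorFlow sketch of tensors flowing through operations.
# Assumes TensorFlow 2.x; under the 1.x API the same graph would be run in a Session.
import tensorflow as tf

x = tf.constant([[1.0, 2.0]])    # a 1x2 tensor (multidimensional data array)
w = tf.constant([[3.0], [4.0]])  # a 2x1 tensor of weights

@tf.function  # traces the Python function into a dataflow graph
def forward(inputs, weights):
    # Each operation (matmul, relu) is a node; the edges between them carry tensors.
    return tf.nn.relu(tf.matmul(inputs, weights))

print(forward(x, w))  # tf.Tensor([[11.]], shape=(1, 1), dtype=float32)
```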

The fundamentals of neural learning

We begin our journey with an attempt to gain a fundamental understanding of the concept of learning. Moreover, what we are really interested in is how such a rich and complex phenomenon as learning has been implemented on what many call the most advanced computer known to humankind. As we will observe, scientists seem to continuously find inspiration in the inner workings of our own biological neural networks. If nature has indeed figured out a way to leverage loosely connected signals from the outside world and patch them together into a continuous flow of responsive and adaptive awareness (something most humans will concur with), we would very much like to know exactly what tricks and treats it may have used to do so. Yet, before we can move on to such topics, we must establish a baseline to understand why the notion of a neural network is far different from most modern machine learning (ML) techniques.

What is a neural network?

It is extremely hard to draw a parallel between neural networks and any other existing algorithmic approach to problem-solving that we have thus far. Linear regression, for example, simply calculates a line of best fit by minimizing the mean of squared errors between the line and the plotted observation points. Similarly, centroid-based clustering simply separates data by iteratively recalculating the distances between similar points until it converges to a stable configuration.
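
To see what "calculating a line of best fit" amounts to in practice, here is a minimal NumPy sketch of ordinary least squares on a handful of made-up points; both the data and the use of the closed-form solver are illustrative choices, not material from this chapter. It is exactly this kind of compact, fully explicable recipe that the next paragraph contrasts with neural networks.

```python
# A minimal NumPy sketch of the least-squares line of best fit described above.
# The data points are made up purely for illustration.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])  # roughly y = 2x + 1, with some noise

# Build a design matrix with a column of ones for the intercept, then minimize
# the sum of squared errors (y - (slope * x + intercept))^2 in closed form.
A = np.vstack([x, np.ones_like(x)]).T
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"best-fit line: y = {slope:.2f}x + {intercept:.2f}")
```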
Neural networks, on the other hand, are not that easily explicable, and there are many reasons for this. One way of looking at this is that a neural network is an algorithm that itself is composed of different algorithms, performing smaller local calculations as data propagates through it. This definition of neural networks presented here is, of course, not complete. We will iteratively improve it throughout this book, as we go over more complex notions and neural network architectures. Yet, for now, we may well begin with a layman's definition: a neural network is a mechanism that automatically learns associations between the inputs you feed it (such as images) and the outputs you are interested in (that is, whether an image has a dog, a cat, or an attack helicopter).
So, now we have a rudimentary idea of what a neural network is: a mechanism that takes inputs and learns associations to predict some outputs. This versatile mechanism is, of course, not limited to being fed images only. Indeed, such networks are equally capable of taking inputs such as some text or recorded audio, and guessing whether they are looking at Shakespeare's Hamlet, or listening to Billie Jean, respectively. But how could such a mechanism compensate for the variety of data, in both form and size, while still producing relevant results? To understand this, many academics find it useful to examine how nature has solved this problem. In fact, the millions of years of evolution that occurred on our planet, through genetic mutations and environmental conditions, have produced something quite similar. Better yet, nature has even equipped each of us with a version of this universal function approximator, right between our two ears! We speak, of course, of the human brain.
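
As a simplified illustration of this input-to-output association, the following Keras sketch trains a tiny classifier to map flattened handwritten-digit images to their labels. The dataset (MNIST), architecture, and training settings are illustrative choices for this sketch rather than a model presented in this chapter.

```python
# A simplified illustration of learning input-to-output associations with Keras.
# The dataset (MNIST) and the tiny architecture are illustrative choices only.
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0  # flatten and scale pixel inputs
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0
y_train, y_test = to_categorical(y_train), to_categorical(y_test)  # one-hot labels

model = Sequential([
    Dense(32, activation='relu', input_shape=(784,)),
    Dense(10, activation='softmax')  # one probability per digit class
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# The network learns the association between pixels (inputs) and digit labels (outputs).
model.fit(x_train, y_train, epochs=2, batch_size=128, validation_data=(x_test, y_test))
```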

Observing the brain

Before we briefly delve into this notorious comparison, it is important for us to clarify here that it is indeed just a comparison, and not a parallel. We do not propose that neural networks work exactly in the manner that our brains do, as this would not only anger quite a few neuroscientists, but also does no justice to the engineering marvel represented by the anatomy of the mammalian brain. This comparison, however, helps us to better understand the workflow by which we may design systems that are capable of picking up relevant patterns from data. The versatility of the human brain, be it in making musica...
