Deep Learning with TensorFlow 2 and Keras

Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API, 2nd Edition

Antonio Gulli, Amita Kapoor, Sujit Pal

About this book

Build machine and deep learning systems with the newly released TensorFlow 2 and Keras for the lab, production, and mobile devices

Key Features

  • Introduces and then uses TensorFlow 2 and Keras right from the start
  • Teaches key machine and deep learning techniques
  • Explains the fundamentals of machine learning and deep learning through clear explanations and extensive code samples

Book Description

Deep Learning with TensorFlow 2 and Keras, Second Edition teaches neural networks and deep learning techniques alongside TensorFlow (TF) and Keras. You'll learn how to write deep learning applications in the most powerful, popular, and scalable machine learning stack available.

TensorFlow is the machine learning library of choice for professional applications, while Keras offers a simple and powerful Python API for accessing TensorFlow. TensorFlow 2 provides full Keras integration, making advanced machine learning easier and more convenient than ever before.

This book also introduces neural networks with TensorFlow, runs through the main applications (regression, ConvNets (CNNs), GANs, RNNs, NLP), covers two working example apps, and then dives into TF in production, TF mobile, and using TensorFlow with AutoML.

What you will learn

  • Build machine learning and deep learning systems with TensorFlow 2 and the Keras API
  • Use regression analysis, one of the most widely used approaches to machine learning
  • Understand ConvNets (convolutional neural networks) and how they are essential for deep learning systems such as image classifiers
  • Use GANs (generative adversarial networks) to create new data that fits with existing patterns
  • Discover RNNs (recurrent neural networks) that can process sequences of input intelligently, using one part of a sequence to correctly interpret another
  • Apply deep learning to natural human language and interpret natural language texts to produce an appropriate response
  • Train your models on the cloud and put TF to work in real environments
  • Explore how Google tools can automate simple ML workflows without the need for complex modeling

Who this book is for

This book is for Python developers and data scientists who want to build machine learning and deep learning systems with TensorFlow. This book gives you the theory and practice required to use Keras, TensorFlow 2, and AutoML to build machine learning systems. Some knowledge of machine learning is expected.


Information

Year: 2019
ISBN: 9781838827724

2

TensorFlow 1.x and 2.x

The intent of this chapter is to explain the differences between TensorFlow 1.x and TensorFlow 2.0. We'll start by reviewing the traditional programming paradigm for 1.x and then we'll move on to all the new features and paradigms available in 2.x.

Understanding TensorFlow 1.x

By tradition, the first program one learns to write in any computer language is "hello world." We maintain that convention in this book! Let's begin with a Hello World program:
import tensorflow as tf

message = tf.constant('Welcome to the exciting world of Deep Neural Networks!')

with tf.Session() as sess:
    print(sess.run(message).decode())
Let's go through this simple code. The first line imports tensorflow. The second line defines the message using tf.constant(). The third line creates a Session() using the with statement, and the fourth line runs the session using run(). Note that run() returns a byte string; to remove the quotes and the b prefix (for bytes) we call the decode() method.
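This snippet targets the TensorFlow 1.x API. If you only have TensorFlow 2 installed, one common way to run 1.x-style code is through the compatibility module that ships with TF 2; a minimal sketch, assuming tensorflow.compat.v1 is available in your installation:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # switch off eager execution so tf.Session() is available

message = tf.constant('Welcome to the exciting world of Deep Neural Networks!')

with tf.Session() as sess:
    print(sess.run(message).decode())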

TensorFlow 1.x computational graph program structure

Working with TensorFlow 1.x is unlike working with most other programming frameworks. We first need to build a blueprint of whatever neural network we want to create. This is accomplished by dividing the program into two separate parts: the definition of a computational graph, and its execution.

Computational graphs

A computational graph is a network of nodes and edges. In the graph, all the data to be used – that is, tensor objects (constants, variables, placeholders) – and all the computations to be performed – that is, operation objects – are defined. Each node can have zero or more inputs but only one output. Nodes represent objects (tensors and operations), and edges represent the tensors that flow between operations. The computational graph defines the blueprint of the neural network, but the tensors in it have no "value" associated with them yet.
A placeholder is simply a variable that we will assign data to at a later time. It allows us to create our computational graph, without needing the data.
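To make this concrete, here is a minimal sketch of a placeholder in the TensorFlow 1.x API (the shape and values are illustrative, not from the book); the data is supplied only at execution time through feed_dict:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[3])   # a "slot" with no data attached yet
y = x * 2                                   # an operation defined on the placeholder

with tf.Session() as sess:
    # The actual data is injected only when the graph is executed.
    print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))   # prints [2. 4. 6.]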
To build a computational graph, we define all the constants, variables, and operations that we need to perform. In the following sections we describe the structure using a simple example of defining and executing a graph to add two vectors.

Execution of the graph

The execution of the graph is performed using the session object, which encapsulates the environment in which tensor and operation objects are evaluated. This is the place where actual calculations and transfers of information from one layer to another take place. The values of different tensor objects are initialized, accessed, and saved in a session object only. Until this point, the tensor objects were just abstract definitions. Here, they come to life.
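For instance, a variable defined in the graph receives its value only inside a session. A minimal sketch of this "coming to life" in the TensorFlow 1.x API (the variable and its initial value are illustrative):

import tensorflow as tf

w = tf.Variable([0.3], dtype=tf.float32)     # abstract definition, no value yet
init = tf.global_variables_initializer()     # op that assigns the initial values

with tf.Session() as sess:
    sess.run(init)        # only here does w actually hold its value
    print(sess.run(w))    # prints [0.3]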

Why do we use graphs at all?

There are multiple reasons why we use graphs. First of all, they are a natural metaphor for describing (deep) networks. Secondly, graphs can be automatically optimized by removing common sub-expressions, fusing kernels, and cutting redundant expressions. Thirdly, graphs can be distributed easily during training and deployed to different environments such as CPUs, GPUs, or TPUs, as well as cloud, IoT, mobile, or traditional servers. Finally, computational graphs will feel familiar if you know functional programming, where complex computations are seen as compositions of simple primitives. TensorFlow borrowed many concepts from computational graphs, and internally it performs several optimizations on our behalf.

An example to start with

We'll consider a simple example of adding two vectors. The graph we want to build consists of two constant nodes, v_1 and v_2, feeding a single addition node, v_add.
The corresponding code to define the computational graph is:
v_1 = tf.constant([1, 2, 3, 4])
v_2 = tf.constant([2, 1, 5, 3])
v_add = tf.add(v_1, v_2)  # You can also write v_1 + v_2 instead
Next, we execute the graph in the session:
with tf.Session() as sess:
    print(sess.run(v_add))
or
sess = tf.Session()
print(sess.run(v_add))
sess.close()
This results in printing the sum of two vectors:
[3 3 8 7] 
Remember, a session created without a with block needs to be explicitly closed using close(); the with form closes it automatically when the block exits.
Building a computational graph is very simple: you keep adding variables and operations and passing tensors through them (flowing the tensors). In this way you build your neural network layer by layer. TensorFlow also allows you to place different objects of the computational graph on specific devices (CPU/GPU) using tf.device(). In our example, the computational graph consists of three nodes: v_1 and v_2, representing the two vectors, and v_add, the operation to be performed on them. To bring this graph to life we first need to define a session object using tf.Session(); we named our session object sess. Next, we run it using the run method defined in the Session class as:
run(fetches, feed_dict=None, options=None, run_metadata=None)
This evaluates the tensor in the fetches parameter. Our example has tensor v_add in fetches. The run me...
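The preceding paragraph also mentions tf.device(). A minimal sketch of pinning graph nodes to a specific device in the TensorFlow 1.x API (the device string '/cpu:0' is illustrative; '/gpu:0' would target the first GPU if one is available):

import tensorflow as tf

# Pin these nodes to the first CPU.
with tf.device('/cpu:0'):
    v_1 = tf.constant([1, 2, 3, 4])
    v_2 = tf.constant([2, 1, 5, 3])
    v_add = tf.add(v_1, v_2)

with tf.Session() as sess:
    print(sess.run(v_add))   # prints [3 3 8 7]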
