Machine Learning Using TensorFlow Cookbook

Over 60 recipes on machine learning using deep learning solutions from Kaggle Masters and Google Developer Experts

Alexia Audevart, Konrad Banachewicz, Luca Massaron

Book Information

Comprehensive recipes to give you valuable insights on Transformers, Reinforcement Learning, and more

Key Features

  • Deep Learning solutions from Kaggle Masters and Google Developer Experts
  • Get to grips with the fundamentals including variables, matrices, and data sources
  • Learn advanced techniques to make your algorithms faster and more accurate

Book Description

The independent recipes in Machine Learning Using TensorFlow Cookbook will teach you how to perform complex data computations and gain valuable insights into your data. Dive into recipes on training models, model evaluation, sentiment analysis, regression analysis, artificial neural networks, and deep learning - each using Google's machine learning library, TensorFlow.

This cookbook covers the fundamentals of the TensorFlow library, including variables, matrices, and various data sources. You'll discover real-world implementations of Keras and TensorFlow and learn how to use estimators to train linear models and boosted trees, both for classification and regression.

Explore the practical applications of a variety of deep learning architectures, such as recurrent neural networks and Transformers, and see how they can be used to solve computer vision and natural language processing (NLP) problems.

With the help of this book, you will be proficient in using TensorFlow, understand deep learning from the basics, and be able to implement machine learning algorithms in real-world scenarios.

What you will learn

  • Take TensorFlow into production
  • Implement and fine-tune Transformer models for various NLP tasks
  • Apply reinforcement learning algorithms using the TF-Agents framework
  • Understand linear regression techniques and use Estimators to train linear models
  • Execute neural networks and improve predictions on tabular data
  • Master convolutional neural networks and recurrent neural networks through practical recipes

Who this book is for

If you are a data scientist or a machine learning engineer, and you want to skip detailed theoretical explanations in favor of building production-ready machine learning models using TensorFlow, this book is for you.

Basic familiarity with Python, linear algebra, statistics, and machine learning is necessary to make the most out of this book.


Information

Year: 2021
ISBN: 9781800206885

6

Neural Networks

In this chapter, we will introduce neural networks and how to implement them in TensorFlow. Most of the subsequent chapters will be based on neural networks, so learning how to use them in TensorFlow is very important.
Neural networks are currently breaking records in tasks such as image and speech recognition, reading handwriting, understanding text, image segmentation, dialog systems, autonomous car driving, and so much more. While some of these tasks will be covered in later chapters, it is important to introduce neural networks as a general-purpose, easy-to-implement machine learning algorithm, so that we can expand on it later.
The concept of a neural network has been around for decades. However, it has only recently gained traction because advances in processing power, algorithm efficiency, and data sizes now give us the computational power to train large networks.
A neural network is, fundamentally, a sequence of operations applied to a matrix of input data. These operations are usually collections of additions and multiplications followed by the application of non-linear functions. One example that we have already seen is logistic regression, which we looked at in Chapter 4, Linear Regression. Logistic regression is a weighted sum of the input features (each feature multiplied by its partial slope) followed by the application of the sigmoid function, which is non-linear. Neural networks generalize this a little more by allowing any combination of operations and non-linear functions, which includes the application of absolute values, maximums, minimums, and so on.
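To make this concrete, here is a minimal sketch, using made-up shapes and random data, of logistic regression expressed exactly this way: a matrix of inputs, a collection of multiplications and additions, and a non-linear function applied at the end:

```python
import tensorflow as tf

# Logistic regression as operations on a matrix of input data:
# multiplications and additions, then a non-linear function.
x = tf.random.normal([4, 3])               # 4 samples, 3 features (made-up data)
w = tf.Variable(tf.random.normal([3, 1]))  # partial slopes (model variables)
b = tf.Variable(tf.zeros([1]))             # intercept

linear = tf.matmul(x, w) + b  # the weighted sum of the features
output = tf.sigmoid(linear)   # the non-linear sigmoid function
```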
The most important trick behind neural networks is called backpropagation. Backpropagation is a procedure that allows us to update the model variables based on the learning rate and the output of the loss function. We used backpropagation to update our model variables in Chapter 3, Keras, and Chapter 4, Linear Regression.
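As a rough illustration on a toy problem (not one of the book's recipes), a single backpropagation step in TensorFlow 2 might look like this: the loss is computed on a forward pass, its gradients flow backward through the operations, and the optimizer moves the variables by an amount scaled by the learning rate:

```python
import tensorflow as tf

# One backpropagation step on a toy problem: learn w so that w * x ≈ y.
x = tf.constant([[1.0], [2.0], [3.0]])
y = tf.constant([[2.0], [4.0], [6.0]])
w = tf.Variable(0.0)

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)  # the learning rate
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(y - w * x))  # output of the loss function
grad = tape.gradient(loss, [w])                  # backpropagate the loss
optimizer.apply_gradients(zip(grad, [w]))        # update the model variable
```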
Another important feature to take note of regarding neural networks is the non-linear activation function. Since most neural networks are just combinations of addition and multiplication operations, they will not be able to model non-linear datasets. To address this issue, we will use non-linear activation functions in our neural networks. This will allow the neural network to adapt to most non-linear situations.
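For instance, two of the most common non-linear activation functions are built into TensorFlow; a quick sketch of how they behave on the same inputs:

```python
import tensorflow as tf

values = tf.constant([-2.0, -1.0, 0.0, 1.0, 2.0])
print(tf.nn.relu(values))     # ReLU clips negatives to zero: [0. 0. 0. 1. 2.]
print(tf.nn.sigmoid(values))  # sigmoid squashes every value into (0, 1)
```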
It is important to remember that, as we have seen in many of the algorithms covered, neural networks are sensitive to the hyperparameters we choose. In this chapter, we will explore the impact of different learning rates, loss functions, and optimization procedures.
There are a few more resources I would recommend for learning about neural networks; they cover the topic in greater depth and detail:
  • The seminal paper describing backpropagation is Efficient BackProp by Yann LeCun et al. The PDF is located here: http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf.
  • CS231n, Convolutional Neural Networks for Visual Recognition, by Stanford University. Class resources are available here: http://cs231n.stanford.edu/.
  • CS224d, Deep Learning for Natural Language Processing, by Stanford University. Class resources are available here: http://cs224d.stanford.edu/.
  • Deep Learning, a book by Ian Goodfellow, Yoshua Bengio, and Aaron Courville, MIT Press, 2016. The book is located here: http://www.deeplearningbook.org.
  • The online book Neural Networks and Deep Learning by Michael Nielsen, which is located here: http://neuralnetworksanddeeplearning.com/.
  • For a more pragmatic approach and introduction to neural networks, Andrej Karpathy has written a great summary with JavaScript examples called A Hacker's Guide to Neural Networks. The write-up is located here: http://karpathy.github.io/neuralnets/.
  • Another site that summarizes deep learning well is Deep Learning for Beginners, a condensed write-up of the book by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. The web page can be found here: http://randomekek.github.io/deep/deeplearning.html.
We will start by introducing the basic concepts of neural networks before working up to multilayer networks. In the last section, we will create a neural network that will learn how to play Tic-Tac-Toe.
In this chapter, we'll cover the following recipes:
  • Implementing operational gates
  • Working with gates and activation functions
  • Implementing a one-layer neural network
  • Implementing different layers
  • Using a multilayer neural network
  • Improving the predictions of linear models
  • Learning to play Tic-Tac-Toe
The reader can find all of the code from this chapter in the Packt repository at https://github.com/PacktPublishing/Machine-Learning-Using-TensorFlow-Cookbook.

Implementing operational gates

One of the most fundamental concepts of neural networks is their functioning as operational gates. In this section, we will start with a multiplication operation as a gate, before moving on to consider nested gate operations.

Getting ready

The first operational gate we will implement is f(x) = a · x.
To optimize this gate, we declare a as a variable and x as the input tensor of our model. This means that TensorFlow will try to change the a value and not the x value. We will create the loss function as the difference between the output and the target value of 50.
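As a preview, here is a minimal sketch of this gate, assuming an input of x = 5.0, an initial value of a = 4.0, and a squared-error loss against the target of 50; the recipe that follows may use different settings:

```python
import tensorflow as tf

a = tf.Variable(4.0)  # the value TensorFlow is allowed to change
x = tf.constant(5.0)  # the fixed input tensor

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
for _ in range(100):
    with tf.GradientTape() as tape:
        loss = tf.square(a * x - 50.0)  # distance of the output from 50
    optimizer.apply_gradients([(tape.gradient(loss, a), a)])
print(a.numpy())  # converges toward 10.0, since 10 * 5 = 50
```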
The second, nested, operational gate will be f(x) = a · x + b.
Again, we will declare a and b as variables and x as the input tensor of our model. We optimize the output toward the target value of 50 again. The interesting thing to note is that the solution for this second example is not unique. There are many combinations of model variables that will allow the output to be 50. With neural networks, we do not care so much about the values of the intermediate model variables, but instead place more emphasis on the desired output.
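A sketch of the nested gate under the same assumptions (input 5.0, squared-error loss, gradient descent) shows this non-uniqueness: whatever pair the optimizer settles on, only a · 5 + b = 50 matters:

```python
import tensorflow as tf

a = tf.Variable(1.0)
b = tf.Variable(1.0)
x = tf.constant(5.0)

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
for _ in range(200):
    with tf.GradientTape() as tape:
        loss = tf.square(a * x + b - 50.0)
    grads = tape.gradient(loss, [a, b])
    optimizer.apply_gradients(zip(grads, [a, b]))
print(a.numpy(), b.numpy())  # one of many (a, b) pairs with a * 5 + b ≈ 50
```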

How to do it...

To implement the first operational gate, f(x) = a · x, in TensorFlow and train the output toward the value of 50, follow the...
