Machine Learning Using TensorFlow Cookbook
eBook - ePub

Machine Learning Using TensorFlow Cookbook

Over 60 recipes on machine learning using deep learning solutions from Kaggle Masters and Google Developer Experts

  1. 416 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

Comprehensive recipes to give you valuable insights on Transformers, Reinforcement Learning, and more

Key Features

  • Deep Learning solutions from Kaggle Masters and Google Developer Experts
  • Get to grips with the fundamentals including variables, matrices, and data sources
  • Learn advanced techniques to make your algorithms faster and more accurate

Book Description

The independent recipes in Machine Learning Using TensorFlow Cookbook will teach you how to perform complex data computations and gain valuable insights into your data. Dive into recipes on training models, model evaluation, sentiment analysis, regression analysis, artificial neural networks, and deep learning - each using Google's machine learning library, TensorFlow.

This cookbook covers the fundamentals of the TensorFlow library, including variables, matrices, and various data sources. You'll discover real-world implementations of Keras and TensorFlow and learn how to use estimators to train linear models and boosted trees, both for classification and regression.

Explore the practical applications of a variety of deep learning architectures, such as recurrent neural networks and Transformers, and see how they can be used to solve computer vision and natural language processing (NLP) problems.

With the help of this book, you will be proficient in using TensorFlow, understand deep learning from the basics, and be able to implement machine learning algorithms in real-world scenarios.

What you will learn

  • Take TensorFlow into production
  • Implement and fine-tune Transformer models for various NLP tasks
  • Apply reinforcement learning algorithms using the TF-Agents framework
  • Understand linear regression techniques and use Estimators to train linear models
  • Execute neural networks and improve predictions on tabular data
  • Master convolutional neural networks and recurrent neural networks through practical recipes

Who this book is for

If you are a data scientist or a machine learning engineer, and you want to skip detailed theoretical explanations in favor of building production-ready machine learning models using TensorFlow, this book is for you.

Basic familiarity with Python, linear algebra, statistics, and machine learning is necessary to make the most out of this book.


6

Neural Networks

In this chapter, we will introduce neural networks and how to implement them in TensorFlow. Most of the subsequent chapters will be based on neural networks, so learning how to use them in TensorFlow is very important.
Neural networks are currently breaking records in tasks such as image and speech recognition, reading handwriting, understanding text, image segmentation, dialog systems, autonomous car driving, and so much more. While some of these tasks will be covered in later chapters, it is important to introduce neural networks as a general-purpose, easy-to-implement machine learning algorithm, so that we can expand on it later.
The concept of a neural network has been around for decades. However, it has only recently gained traction because advances in processing power, algorithm efficiency, and dataset sizes have given us the computational power to train large networks.
A neural network is, fundamentally, a sequence of operations applied to a matrix of input data. These operations are usually collections of additions and multiplications followed by the application of non-linear functions. One example that we have already seen is logistic regression, which we looked at in Chapter 4, Linear Regression. Logistic regression is the sum of partial slope-feature products followed by the application of the sigmoid function, which is non-linear. Neural networks generalize this a little more by allowing any combination of operations and non-linear functions, which includes the application of absolute values, maximums, minimums, and so on.
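To make this concrete, logistic regression can be written as a single neural unit: a weighted sum of the inputs plus a bias, passed through the sigmoid. A minimal sketch (the feature values, weights, and bias below are illustrative, not from any recipe):

```python
import tensorflow as tf

# A single "neuron": a weighted sum of the inputs plus a bias,
# followed by the non-linear sigmoid function.
x = tf.constant([[1.0, 2.0, 3.0]])        # one sample, three features
w = tf.constant([[0.5], [-0.25], [0.1]])  # one weight per feature
b = tf.constant(0.2)

logit = tf.matmul(x, w) + b   # linear part: x @ w + b = 0.5
output = tf.sigmoid(logit)    # non-linear part: sigmoid(0.5) ~ 0.6225
```

A whole neural network is just many of these units stacked and chained, with the non-linear function applied between the linear operations.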
The most important trick to neural networks is called backpropagation. Backpropagation is a procedure that allows us to update model variables based on the learning rate and the output of the loss function. We used backpropagation to update our model variables in Chapter 3, Keras, and Chapter 4, Linear Regression.
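In TensorFlow 2.x, one step of this procedure can be sketched with `tf.GradientTape`: compute the loss, ask the tape for the gradient with respect to the model variable, and move the variable a small step (scaled by the learning rate) in the negative-gradient direction. The starting value, input, and target below are arbitrary illustrations:

```python
import tensorflow as tf

a = tf.Variable(1.0)       # model variable to learn
x, target = 2.0, 10.0      # illustrative input and target
learning_rate = 0.05

for _ in range(100):
    with tf.GradientTape() as tape:
        loss = tf.square(a * x - target)  # squared-error loss
    grad = tape.gradient(loss, a)         # d(loss)/d(a) via backpropagation
    a.assign_sub(learning_rate * grad)    # gradient-descent update

# a converges toward target / x = 5.0
```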
Another important feature to take note of regarding neural networks is the non-linear activation function. Since most neural networks are just combinations of addition and multiplication operations, they will not be able to model non-linear datasets. To address this issue, we will use non-linear activation functions in our neural networks. This will allow the neural network to adapt to most non-linear situations.
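TensorFlow exposes the common activation functions in the `tf.nn` module; applying one elementwise after each linear operation is what lets the network bend to non-linear data. A quick illustration:

```python
import tensorflow as tf

z = tf.constant([-2.0, 0.0, 2.0])

relu_out = tf.nn.relu(z)        # max(0, z) -> [0., 0., 2.]
sigmoid_out = tf.nn.sigmoid(z)  # squashes values into (0, 1)
tanh_out = tf.nn.tanh(z)        # squashes values into (-1, 1), zero-centered
```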
It is important to remember that, as we have seen in many of the algorithms covered, neural networks are sensitive to the hyperparameters we choose. In this chapter, we will explore the impact of different learning rates, loss functions, and optimization procedures.
There are a few more resources I would recommend for learning about neural networks in greater depth and detail:
  • The seminal paper describing backpropagation is Efficient BackProp by Yann LeCun et al. The PDF is located here: http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf.
  • CS231, Convolutional Neural Networks for Visual Recognition, by Stanford University. Class resources are available here: http://cs231n.stanford.edu/.
  • CS224d, Deep Learning for Natural Language Processing, by Stanford University. Class resources are available here: http://cs224d.stanford.edu/.
  • Deep Learning, a book by the MIT Press. Goodfellow, et al. 2016. The book is located here: http://www.deeplearningbook.org.
  • The online book Neural Networks and Deep Learning by Michael Nielsen, which is located here: http://neuralnetworksanddeeplearning.com/.
  • For a more pragmatic approach and introduction to neural networks, Andrej Karpathy has written a great summary with JavaScript examples called A Hacker's Guide to Neural Networks. The write-up is located here: http://karpathy.github.io/neuralnets/.
  • Another site that summarizes deep learning well is called Deep Learning for Beginners by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. The web page can be found here: http://randomekek.github.io/deep/deeplearning.html.
We will start by introducing the basic concepts of neural networks before working up to multilayer networks. In the last section, we will create a neural network that will learn how to play Tic-Tac-Toe.
In this chapter, we'll cover the following recipes:
  • Implementing operational gates
  • Working with gates and activation functions
  • Implementing a one-layer neural network
  • Implementing different layers
  • Using a multilayer neural network
  • Improving the predictions of linear models
  • Learning to play Tic-Tac-Toe
The reader can find all of the code from this chapter in the book's GitHub repository at https://github.com/PacktPublishing/Machine-Learning-Using-TensorFlow-Cookbook.

Implementing operational gates

One of the most fundamental concepts of neural networks is that they function as operational gates. In this section, we will start with a multiplication operation as a gate, before moving on to consider nested gate operations.

Getting ready

The first operational gate we will implement is f(x) = a · x:
To optimize this gate, we declare the a input as a variable and x as the input tensor of our model. This means that TensorFlow will try to change the a value and not the x value. We will create the loss function as the squared difference (L2 loss) between the output and the target value, which is 50.
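A minimal sketch of optimizing this gate follows; the target of 50 comes from the text, while the input value x = 5 and the starting value of a are assumptions for illustration:

```python
import tensorflow as tf

a = tf.Variable(4.0)   # the variable TensorFlow will optimize
x = tf.constant(5.0)   # fixed input tensor (assumed value)
target = 50.0

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
for _ in range(200):
    with tf.GradientTape() as tape:
        loss = tf.square(a * x - target)  # squared distance from the target
    grads = tape.gradient(loss, [a])
    optimizer.apply_gradients(zip(grads, [a]))

# a converges toward 10, since 10 * 5 = 50
```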
The second, nested, operational gate will be f(x) = a · x + b:
Again, we will declare a and b as variables and x as the input tensor of our model. We optimize the output toward the target value of 50 again. The interesting thing to note is that the solution for this second example is not unique. There are many combinations of model variables that will allow the output to be 50. With neural networks, we do not care so much about the values of the intermediate model variables, but instead place more emphasis on the desired output.
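The nested gate can be sketched the same way; note that any (a, b) pair satisfying a · x + b = 50 is a valid solution, so the values the variables settle on depend on where they start. As before, x = 5 and the initial values of a and b are assumptions for illustration:

```python
import tensorflow as tf

a = tf.Variable(1.0)
b = tf.Variable(1.0)
x = tf.constant(5.0)   # fixed input tensor (assumed value)
target = 50.0

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
for _ in range(300):
    with tf.GradientTape() as tape:
        loss = tf.square(a * x + b - target)
    grads = tape.gradient(loss, [a, b])
    optimizer.apply_gradients(zip(grads, [a, b]))

# The output a * x + b converges to 50; the individual values of
# a and b are not unique -- they depend on their initial values.
```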

How to do it...

To implement the first operational gate, f(x) = a · x, in TensorFlow and train the output toward the value of 50, follow the...

Table of contents

  1. Preface
  2. Getting Started with TensorFlow 2.x
  3. The TensorFlow Way
  4. Keras
  5. Linear Regression
  6. Boosted Trees
  7. Neural Networks
  8. Predicting with Tabular Data
  9. Convolutional Neural Networks
  10. Recurrent Neural Networks
  11. Transformers
  12. Reinforcement Learning with TensorFlow and TF-Agents
  13. Taking TensorFlow to Production
  14. Other Books You May Enjoy
  15. Index

Machine Learning Using TensorFlow Cookbook by Alexia Audevart, Konrad Banachewicz, and Luca Massaron is available in PDF and ePUB format, in the category Computer Science & Artificial Intelligence (AI) and Semantics.