Hands-On Machine Learning on Google Cloud Platform
  1. 500 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

Unleash Google's Cloud Platform to build, train and optimize machine learning models

Key Features

  • Get well versed in GCP's pre-existing services to build your own smart models
  • A comprehensive guide covering everything from data processing and analysis to building and training ML models
  • A practical approach to productionizing your trained ML models and porting them to mobile for easy access

Book Description

Google Cloud Machine Learning Engine combines the services of Google Cloud Platform with the power and flexibility of TensorFlow. With this book, you will not only learn to build and train different complexities of machine learning models at scale but also host them in the cloud to make predictions.

This book focuses on making the most of the Google Machine Learning Platform for large datasets and complex problems. You will learn from scratch how to create powerful machine learning applications for a wide variety of problems by leveraging different data services from Google Cloud Platform. Applications include NLP, speech-to-text, reinforcement learning, time series, recommender systems, image classification, video content inference, and many others. We will implement a wide variety of deep learning use cases and make extensive use of the data-related services that make up the Google Cloud Platform ecosystem, such as Firebase, the Storage APIs, Datalab, and so forth. This will enable you to integrate machine learning and data processing features into your web and mobile applications.

By the end of this book, you will know the main difficulties that you may encounter, along with appropriate strategies to overcome them and build efficient systems.

What you will learn

  • Use Google Cloud Platform to build data-based applications for dashboards, web, and mobile
  • Create, train and optimize deep learning models for various data science problems on big data
  • Learn how to leverage BigQuery to explore big datasets
  • Use Google's pre-trained TensorFlow models for NLP, image, video and much more
  • Create models and architectures for Time series, Reinforcement Learning, and generative models
  • Create, evaluate, and optimize TensorFlow and Keras models for a wide range of applications

Who this book is for

This book is for data scientists, machine learning developers, and AI developers who want to learn Google Cloud Platform services to build machine learning applications. Since interaction with the Google ML platform is mostly done via the command line, the reader is expected to have some familiarity with the bash shell and Python scripting. Some understanding of machine learning and data science concepts will also be helpful.

Hands-On Machine Learning on Google Cloud Platform by Giuseppe Ciaburro, V Kishore Ayyadevara, and Alexis Perrier is available in PDF and ePUB format, filed under Computer Science & Artificial Intelligence (AI) & Semantics.

Generative Neural Networks

In recent times, neural networks have been used as generative models: algorithms able to learn the distribution of their input data and then generate new values from that distribution. Usually, an image dataset is analyzed, and we try to learn the distribution associated with the pixels of the images in order to produce shapes similar to the original ones. Much work is ongoing to get neural networks to create novels, articles, art, and music.
Artificial intelligence (AI) researchers are interested in generative models because they represent a springboard towards the construction of AI systems able to use raw data from the world and automatically extract knowledge. These models seem to be a way to train computers to understand the concepts without the need for researchers to teach such concepts a priori. It would be a big step forward compared to current systems, which are only able to learn from training data accurately labeled by competent human beings.
In this chapter, we will explore one of the most exciting research avenues: generative models built with neural networks. First, we will get an introduction to unsupervised learning algorithms; then an overview of generative models will be given. We will also cover the most common generative models and show how to implement a few examples. Finally, we will introduce the reader to the NSynth dataset and the Google Magenta project.
The topics covered are:
  • Unsupervised learning
  • Generative model introduction
  • Restricted Boltzmann machine
  • Deep Boltzmann machines
  • Autoencoder
  • Variational autoencoder
  • Generative adversarial network
  • Adversarial autoencoder
By the end of the chapter, the reader will know how to generate different types of content with a neural network and how to extract the content it produces.

Unsupervised learning

Unsupervised learning is a machine learning technique that, starting from a series of inputs (the system's experience), is able to reclassify and organize them on the basis of common characteristics in order to make predictions on subsequent inputs. Unlike supervised learning, only unlabeled examples are provided to the learner during the learning process, as the classes are not known a priori but must be learned automatically.
The following diagram shows three groups identified from raw data:
From this diagram, it is possible to notice that the system has identified three groups on the basis of a similarity, which in this case is due to proximity. In general, unsupervised learning tries to identify the internal structure of data to reproduce it.
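The proximity-based grouping described above can be sketched with a minimal k-means clustering implemented in NumPy. The data, the cluster count, and the iteration budget are invented for this illustration; a production system would use a library implementation instead:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three synthetic clusters of unlabeled 2-D points (assumed data)
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.3, size=(30, 2)),
    rng.normal(loc=[4, 0], scale=0.3, size=(30, 2)),
    rng.normal(loc=[2, 3], scale=0.3, size=(30, 2)),
])

def kmeans(points, k, n_iter=20):
    # Initialize centroids with k randomly chosen points
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid (grouping by proximity)
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points;
        # keep the old centroid if a cluster happens to be empty
        centroids = np.array([
            points[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
            for i in range(k)
        ])
    return labels, centroids

labels, centroids = kmeans(data, k=3)
print(np.bincount(labels, minlength=3))  # size of each discovered group
```

Note that no labels are ever supplied: the three groups emerge purely from the internal structure of the data, which is exactly the point made above.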
Typical examples of these algorithms are search engines. These programs, given one or more keywords, are able to create a list of links to pages that the algorithm considers relevant to the search carried out. The validity of these algorithms depends on the usefulness of the information that they can extract from the database.
Unsupervised learning techniques work by comparing data and looking for similarities or differences. As is well known, machine learning algorithms try to imitate the functioning of an animal's nervous system. For this purpose, we can hypothesize that neural processes are guided by mechanisms that optimize whatever unknown objective they pursue. Each process evolves from an initial situation, associated with a stimulus, to a terminal one in which there is a response, which is the result of the process itself. Intuitively, in this evolution there is a transfer of information: the stimulus provides the information necessary to obtain the desired response. It is therefore important that this information is transmitted as faithfully as possible until the process is completed. A reasonable criterion for interpreting the processes that take place in the nervous system is thus to consider them as transfers of information with maximum preservation of that information.
Unsupervised learning algorithms are based on these concepts. The idea is to use learning-theory techniques to measure the loss of information that occurs in the transfer. The process under consideration is treated as the transmission of a signal through a noisy channel, using well-known techniques developed in the field of communications. It is possible, however, to follow a different approach based on a geometric representation of the process. Both the stimulus and the response are characterized by an appropriate number of components, which correspond to a point in a space. The process can thus be interpreted as a geometric transformation from the input space to the output space. The output space has a smaller dimension than the input space, because the stimulus contains the information necessary to activate many simultaneous processes; with respect to any single one of them, it is redundant. This means that the transformation under consideration always involves a redundancy-reduction operation.
In the input and output spaces, typical regions form, with which the information is associated. The natural mechanism that controls the transfer of information must therefore identify, in some way, the regions that are important for the process under consideration and make sure that they correspond under the transformation. A data-grouping operation is thus present in the process in question; this operation can be identified with the acquisition of experience. The two operations of grouping and redundancy reduction are typical of optimal signal processing, and there is biological evidence of their existence in the functioning of the nervous system. It is interesting to note that both operations are achieved automatically in unsupervised learning based on experimental principles, such as competitive learning.
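The redundancy-reduction idea — mapping a large input space onto a smaller output space while preserving most of the information — can be illustrated with principal component analysis (PCA). The dimensions, noise level, and sample count below are invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
# 200 samples in a 5-D input space whose variation really lies in 2 dimensions
latent = rng.normal(size=(200, 2))                       # informative part of the stimulus
mixing = rng.normal(size=(2, 5))
x = latent @ mixing + 0.01 * rng.normal(size=(200, 5))   # redundant 5-D observation

# PCA via SVD: project onto the directions of maximal variance
x_centered = x - x.mean(axis=0)
_, singular_values, vt = np.linalg.svd(x_centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

# The first two components capture almost all of the variance,
# so the 5-D input can be reduced to a 2-D output space
z = x_centered @ vt[:2].T
print(np.round(explained, 3))
print(z.shape)  # (200, 2)
```

The projection discards only the (small) redundant part of the signal, which is the geometric transformation with redundancy reduction described above.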

Generative models

A generative model aims to generate all the values of a phenomenon: both those that can be observed (inputs) and those that can be computed from the observed ones (targets). To understand how such a model can achieve this goal, we first propose a distinction between generative and discriminative models.
Often, in machine learning, we need to predict the value of a target vector y given the value of an input vector x. From a probabilistic perspective, the goal is to find the conditional probability distribution p(y|x).
The conditional probability of an event y with respect to an event x is the probability that y occurs, given that x has occurred. This probability, denoted p(y|x), expresses a correction of our expectations for y dictated by the observation of x.
The most common approach to this problem is to represent the conditional distribution using a parametric model, and then determine the parameters using a training set consisting of pairs (xn, yn) that contain both the values of the input variables and the corresponding output vectors. The resulting conditional distribution can be used to make predictions of the target y for new input values x. This is known as a discriminative approach, since the conditional distribution discriminates directly between the different values of y.
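A minimal sketch of the discriminative approach, assuming a one-dimensional input and a logistic (sigmoid) parametric model of p(y|x), fitted by gradient descent on synthetic (xn, yn) pairs invented for this example:

```python
import numpy as np

rng = np.random.default_rng(0)
# Training pairs (x_n, y_n): noisy binary targets determined by the sign of x
x = rng.normal(size=200)
y = (x + 0.5 * rng.normal(size=200) > 0).astype(float)

# Parametric model of p(y=1|x): sigmoid(w*x + b), fitted by gradient descent
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # predicted conditional probability
    grad_w = np.mean((p - y) * x)            # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# The fitted conditional distribution discriminates directly between values of y:
# strongly negative x gives p(y=1|x) near 0, strongly positive x near 1
for x_new in (-2.0, 0.0, 2.0):
    print(x_new, 1.0 / (1.0 + np.exp(-(w * x_new + b))))
```

Nothing here models how x itself is distributed; the model only learns the mapping from x to p(y|x), which is exactly what makes the approach discriminative.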
As an alternative to this approach, we can model the joint probability distribution p(x, y), and then use this joint distribution to evaluate the conditional probability p(y|x).
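To make the relation between the joint and the conditional distributions concrete, here is a toy discrete example (the probability table is invented) that recovers p(y|x) from p(x, y) by dividing by the marginal p(x):

```python
import numpy as np

# A small joint distribution p(x, y) over two binary variables,
# stored as a table: rows index x, columns index y (values are assumptions)
joint = np.array([
    [0.30, 0.10],   # p(x=0, y=0), p(x=0, y=1)
    [0.15, 0.45],   # p(x=1, y=0), p(x=1, y=1)
])
assert np.isclose(joint.sum(), 1.0)

# Marginal p(x): sum the joint over y
p_x = joint.sum(axis=1)

# Conditional p(y|x) = p(x, y) / p(x) — the quantity a discriminative model
# would learn directly, here recovered from the generative joint distribution
p_y_given_x = joint / p_x[:, None]

print(p_y_given_x)
# row x=0: [0.75, 0.25]; row x=1: [0.25, 0.75]
```

The generative route models more (the full joint, including how x is distributed), which is harder to learn but also lets the model generate new (x, y) pairs, not just predict y.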

Table of contents

  1. Title Page
  2. Copyright and Credits
  3. Packt Upsell
  4. Contributors
  5. Preface
  6. Introducing the Google Cloud Platform
  7. Google Compute Engine
  8. Google Cloud Storage
  9. Querying Your Data with BigQuery
  10. Transforming Your Data
  11. Essential Machine Learning
  12. Google Machine Learning APIs
  13. Creating ML Applications with Firebase
  14. Neural Networks with TensorFlow and Keras
  15. Evaluating Results with TensorBoard
  16. Optimizing the Model through Hyperparameter Tuning
  17. Preventing Overfitting with Regularization
  18. Beyond Feedforward Networks – CNN and RNN
  19. Time Series with LSTMs
  20. Reinforcement Learning
  21. Generative Neural Networks
  22. Chatbots