Hands-On Transfer Learning with Python

Implement advanced deep learning and neural network models using TensorFlow and Keras

Dipanjan Sarkar, Raghav Bali, Tamoghna Ghosh

438 pages · English · ePub

About this book

Deep learning simplified by taking supervised, unsupervised, and reinforcement learning to the next level using the Python ecosystem

Key Features

  • Build deep learning models with transfer learning principles in Python
  • Implement transfer learning to solve real-world research problems
  • Perform complex operations such as image captioning and neural style transfer

Book Description

Transfer learning is a machine learning (ML) technique in which knowledge gained while training on one set of problems is used to solve other, similar problems.

The purpose of this book is twofold. First, we focus on detailed coverage of deep learning (DL) and transfer learning, comparing and contrasting the two with easy-to-follow concepts and examples. Second, we tackle real-world examples and research problems using TensorFlow, Keras, and the Python ecosystem, with hands-on code.

The book starts with the key essential concepts of ML and DL, followed by coverage of important DL architectures such as convolutional neural networks (CNNs), deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM), and capsule networks. Our focus then shifts to transfer learning concepts such as model freezing, fine-tuning, and pre-trained models including VGG, Inception, and ResNet, and how these can outperform DL models trained from scratch, with practical examples. In the concluding chapters, we focus on a multitude of real-world case studies and problems in areas such as computer vision, audio analysis, and natural language processing (NLP).

By the end of this book, you will be able to implement both DL and transfer learning principles in your own systems.

What you will learn

  • Set up your own DL environment with graphics processing unit (GPU) and Cloud support
  • Delve into transfer learning principles with ML and DL models
  • Explore various DL architectures, including CNN, LSTM, and capsule networks
  • Learn about data and network representation and loss functions
  • Get to grips with models and strategies in transfer learning
  • Walk through potential challenges in building complex transfer learning models from scratch
  • Explore real-world research problems related to computer vision and audio analysis
  • Understand how transfer learning can be leveraged in NLP

Who this book is for

Hands-On Transfer Learning with Python is for data scientists, machine learning engineers, analysts, and developers with an interest in data and in applying state-of-the-art transfer learning methodologies to solve tough real-world problems. Basic proficiency in machine learning and Python is required.



Deep Learning Essentials

This chapter provides a whirlwind tour of deep learning essentials, starting from the very basics of what deep learning really means, and then moving on to other essential concepts and terminology around neural networks. The reader will be given an overview of the basic building blocks of neural networks, and how deep neural networks are trained. Concepts surrounding model training, including activation functions, loss functions, backpropagation, and hyperparameter-tuning strategies will be covered. These foundational concepts will be of great help for both beginners and experienced data scientists who are venturing into deep neural network models. Special focus has been given to how to set up a robust cloud-based deep learning environment with GPU support, along with tips for setting up an in-house deep learning environment. This should be very useful for readers looking to build large-scale deep learning models on their own. The following topics will be covered in the chapter:
  • What is deep learning?
  • Deep learning fundamentals
  • Setting up a robust, cloud-based deep learning environment with GPU support
  • Setting up a robust, on-premise deep learning environment with GPU support
  • Neural network basics

What is deep learning?

In machine learning (ML), we try to automatically discover rules for mapping input data to a desired output. In this process, it's very important to create appropriate representations of data. For example, if we want to create an algorithm to classify an email as spam/ham, we need to represent the email data numerically. One simple representation could be a binary vector where each component depicts the presence or absence of a word from a predefined vocabulary of words. Also, these representations are task-dependent, that is, representations may vary according to the final task that we desire our ML algorithm to perform.
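The binary-vector idea above can be sketched in a few lines of Python. The vocabulary here is a hypothetical five-word example for illustration; a real spam filter would use a vocabulary of thousands of words built from a training corpus.

```python
# Minimal binary bag-of-words representation of an email.
# VOCAB is a toy, hypothetical vocabulary chosen for illustration.
VOCAB = ["free", "winner", "meeting", "project", "prize"]

def binary_vector(text, vocab=VOCAB):
    """Return a 0/1 vector: 1 if the vocabulary word occurs in the text."""
    tokens = set(text.lower().split())
    return [1 if word in tokens else 0 for word in vocab]

print(binary_vector("You are a WINNER claim your free prize now"))
# [1, 1, 0, 0, 1]
```

Note that the same email would get a different, task-dependent representation for sentiment detection, where the vocabulary would instead hold polarity-bearing words.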
In the preceding email example, if we want to detect sentiment in the email instead of identifying spam/ham, a more useful representation of the data could be binary vectors where the predefined vocabulary consists of words with positive or negative polarity. The successful application of most ML algorithms, such as random forests and logistic regression, depends on how good the data representation is.
How do we get these representations? Typically, they are human-crafted features designed iteratively by making some intelligent guesses. This step is called feature engineering, and it is one of the crucial steps in most ML workflows. Support Vector Machines (SVMs), or kernel methods in general, try to create more relevant representations of the data by transforming the hand-crafted representation into a higher-dimensional space where solving the ML task with classification or regression becomes easier. However, SVMs are hard to scale to very large datasets and are not that successful for problems such as image classification and speech recognition. Ensemble models, such as random forests and Gradient Boosting Machines (GBMs), create a collection of weak models that each specialize in doing a small task well, and then combine these weak models in some way to arrive at the final output. They work quite well when we have very large input dimensions and creating handcrafted features is a very time-consuming step. In summary, all the previously mentioned ML methods work with a shallow representation of data: a set of handcrafted features followed by some non-linear transformations.
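To make the feature-engineering step concrete, here is a minimal sketch of human-crafted features for the spam task. The particular features (word count, exclamation marks, uppercase ratio, presence of "free") are hypothetical guesses of the kind a practitioner might iterate on, not features prescribed by the book.

```python
# Handcrafted feature-engineering sketch: each email is mapped to a
# small set of human-designed numeric features that a shallow model
# (e.g. logistic regression) could then consume.
def handcrafted_features(text):
    words = text.split()
    return {
        "num_words": len(words),
        "num_exclaims": text.count("!"),
        # fraction of fully upper-case words, guarded against empty input
        "upper_ratio": sum(w.isupper() for w in words) / max(len(words), 1),
        "has_free": int("free" in text.lower()),
    }

print(handcrafted_features("FREE prize!! Act now"))
# {'num_words': 4, 'num_exclaims': 2, 'upper_ratio': 0.25, 'has_free': 1}
```

Designing, testing, and refining features like these by hand is exactly the iterative loop that deep learning, discussed next, aims to automate.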
Deep learning is a subfield of ML, where a hierarchical representation of the data is created. Higher levels of the hierarchy are formed by the composition of lower-level representations. More importantly, this hierarchy of representation is learned automatically from data by completely automating the most crucial step in ML, called feature-engineering. Automatically learning features at multiple levels of abstraction allows a system to learn complex representations of the input to the output directly from data, without depending completely on human-crafted features.
A deep learning model is actually a neural network with multiple hidden layers, which can help create layered hierarchical representations of the input data. It is called deep because we end up using multiple hidden layers to get the representations. In the simplest of terms, deep learning can also be called hierarchical feature-engineering (of course, we can do much more, but this is the core principle). One simple example of a deep neural network can be a multilayer perceptron (MLP) with more than one hidden layer. Let's consider the MLP-based face-recognition system in the following figure. The lowest-level features that it learns are some edges and patterns of contrasts. The next layer is then able to use those patterns of local contrast to resemble eyes, noses, and lips. Finally, the top layer uses those facial features to create face templates. The deep network is composing simple features to create features of increasing complexity, as depicted in the following diagram:
Hierarchical feature representation with deep neural nets (source: https://www.rsipvision.com/exploring-deep-learning/)
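As a toy illustration of "multiple hidden layers", the following NumPy snippet pushes an input through a stack of randomly initialised layers, each one transforming the previous layer's representation. The layer sizes and the ReLU activation are arbitrary illustrative choices, not a trained face-recognition model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def mlp_forward(x, layer_sizes):
    """Forward pass of an MLP with random (untrained) weights.

    Each iteration applies one layer: a linear map followed by a
    non-linearity, producing the next level of representation.
    """
    h = x
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = rng.normal(0.0, 0.1, size=(n_in, n_out))
        b = np.zeros(n_out)
        h = relu(h @ W + b)  # next-level representation
    return h

x = rng.normal(size=(1, 64))           # e.g. a flattened image patch
out = mlp_forward(x, [64, 32, 16, 8])  # 64-dim input -> 8-dim output
print(out.shape)  # (1, 8)
```

In a real system the weights would be learned by backpropagation (covered later in this chapter), which is what makes the intermediate layers settle into useful features such as edges and facial parts.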
To understand deep learning, we need to have a clear understanding of the building blocks of neural networks, how these networks are trained, a...
