Deep Learning with PyTorch

Vishnu Subramanian


About This Book

Build neural network models in text, vision, and advanced analytics using PyTorch.
  • Learn PyTorch for implementing cutting-edge deep learning algorithms.
  • Train your neural networks for higher speed and flexibility, and learn how to implement them in various scenarios.
  • Cover advanced neural network architectures such as ResNet, Inception, DenseNet, and more, with practical examples.

Who This Book Is For
This book is for machine learning engineers, data analysts, and data scientists who are interested in deep learning and looking to explore implementing advanced algorithms in PyTorch. Some knowledge of machine learning is helpful but not mandatory. Working knowledge of Python programming is expected.

What You Will Learn
  • Use PyTorch for GPU-accelerated tensor computations
  • Build custom datasets and data loaders for images, and test the models using torchvision and torchtext
  • Build an image classifier by implementing CNN architectures using PyTorch
  • Build systems that perform text classification and language modeling using RNNs, LSTMs, and GRUs
  • Learn advanced CNN architectures such as ResNet, Inception, and DenseNet, and how to use them for transfer learning
  • Learn how to mix multiple models into a powerful ensemble model
  • Generate new images using GANs and generate artistic images using style transfer

In Detail
Deep learning powers the most intelligent systems in the world, such as Google Voice, Siri, and Alexa. Advancements in powerful hardware such as GPUs, software frameworks such as PyTorch, Keras, TensorFlow, and CNTK, and the availability of big data have made it easier to implement solutions to problems in the areas of text, vision, and advanced analytics.

This book will get you up and running with one of the most cutting-edge deep learning libraries: PyTorch. PyTorch is grabbing the attention of deep learning researchers and data science professionals due to its accessibility, its efficiency, and its being more native to the Python way of development. You'll start off by installing PyTorch, then quickly move on to the various fundamental blocks that power modern deep learning. You will also learn how to use CNNs, RNNs, LSTMs, and other networks to solve real-world problems. This book explains the concepts of various state-of-the-art deep learning architectures, such as ResNet, DenseNet, Inception, and Seq2Seq, without diving deep into the math behind them. You will also learn about GPU computing during the course of the book. You will see how to train a model with PyTorch and dive into complex neural networks such as generative networks for producing text and images. By the end of the book, you'll be able to implement deep learning applications in PyTorch with ease.

Style and Approach
An end-to-end guide that teaches you all about PyTorch and how to implement it in various scenarios.


Information

Year: 2018
ISBN: 9781788626071

Deep Learning with Sequence Data and Text

In the last chapter, we covered how to handle spatial data using Convolutional Neural Networks (CNNs) and also built image classifiers. In this chapter, we will cover the following topics:
  • Different representations of text data that are useful for building deep learning models
  • Understanding recurrent neural networks (RNNs) and different implementations of RNNs, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), which power most of the deep learning models for text and sequential data
  • Using one-dimensional convolutions for sequential data
Some of the applications that can be built using RNNs are:
  • Document classifiers: Identifying the sentiment of a tweet or review, classifying news articles
  • Sequence-to-sequence learning: For tasks such as language translation, for example converting English to French
  • Time-series forecasting: Predicting the sales of a store given details about previous days' sales

Working with text data

Text is one of the commonly used sequential data types. Text data can be seen as either a sequence of characters or a sequence of words. It is common to see text as a sequence of words for most problems. Deep learning sequential models such as RNN and its variants are able to learn important patterns from text data that can solve problems in areas such as:
  • Natural language understanding
  • Document classification
  • Sentiment classification
These sequential models also act as important building blocks for various systems, such as question answering (QA) systems.
Though these models are highly useful in building these applications, they do not have an understanding of human language, due to its inherent complexities; rather, they successfully find useful patterns that are then used to perform different tasks. Applying deep learning to text is a fast-growing field, and new techniques arrive every month. We will cover the fundamental components that power most modern-day deep learning applications.
Deep learning models, like any other machine learning model, do not understand text, so we need to convert text into numerical representation. The process of converting text into numerical representation is called vectorization and can be done in different ways, as outlined here:
  • Convert text into words and represent each word as a vector
  • Convert text into characters and represent each character as a vector
  • Create n-grams of words and represent them as vectors
Text data can be broken down into one of these representations. Each smaller unit of text is called a token, and the process of breaking text into tokens is called tokenization. There are a lot of powerful libraries available in Python that can help us with tokenization. Once we convert the text data into tokens, we then need to map each token to a vector. One-hot encoding and word embedding are the two most popular approaches for mapping tokens to vectors. To summarize, text is first broken into tokens, and each token is then mapped to a vector representation.
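To make this pipeline concrete, here is a minimal sketch (our own illustration, not code from the book) that tokenizes a sentence into words and maps each unique token to a one-hot vector:

# Illustrative sketch: tokenization followed by one-hot vectorization.
# The variable names here are our own, not the book's.
text = "the action scenes were top notch in this movie"

# Tokenize: split the text into word tokens.
tokens = text.split()

# Build a vocabulary mapping each unique token to an integer index.
vocab = {token: idx for idx, token in enumerate(sorted(set(tokens)))}

# Map a token to a one-hot vector of length len(vocab).
def one_hot(token, vocab):
    vector = [0] * len(vocab)
    vector[vocab[token]] = 1
    return vector

print(one_hot("action", vocab))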
Let's look in more detail at tokenization, n-gram representation, and vectorization.

Tokenization

Given a sentence, splitting it into either characters or words is called tokenization. There are libraries, such as spaCy, that offer complex solutions to tokenization. Let's use simple Python functions such as split and list to convert the text into tokens.
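For comparison, this is what tokenization with spaCy looks like. The following snippet is our own illustration and assumes spaCy and its en_core_web_sm model are installed:

# Our own illustration; requires: pip install spacy
# and: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The action scenes were top notch in this movie.")

# Unlike a plain split(), spaCy treats punctuation as separate tokens.
print([token.text for token in doc])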
To demonstrate how tokenization works on characters and words, let's consider a small review of the movie Thor: Ragnarok. We will work with the following text:
The action scenes were top notch in this movie. Thor has never been this epic in the MCU. He does some pretty epic sh*t in this movie and he is definitely not under-powered anymore. Thor in unleashed in this, I love that.

Converting text into characters

The Python list function takes a string and converts it into a list of individual characters. This does the job of converting the text into characters. The following code block shows the code used and the results:
thor_review = "the action scenes were top notch in this movie. Thor has never been this epic in the MCU. He does some pretty epic sh*t in this movie and he is definitely not under-powered anymore. Thor in unleashed in this, I love that."

print(list(thor_review))

The result is as follows:

#Results
['t', 'h', 'e', ' ', 'a', 'c', 't', 'i', 'o', 'n', ' ', 's', 'c', 'e', 'n', 'e', 's', ' ', 'w', 'e', 'r', 'e', ' ', 't', 'o', 'p', ' ', 'n', 'o', 't', 'c', 'h', ' ', 'i', 'n', ' ', 't', 'h', 'i', 's', ' ', 'm', 'o', 'v', 'i', 'e', '.', ' ', 'T', 'h', 'o', 'r', ' ', 'h', 'a', 's', ' ', 'n', 'e', 'v', 'e', 'r', ' ', 'b', 'e', 'e', 'n', ' ', 't', 'h', 'i', 's', ' ', 'e', 'p', 'i', 'c', ' ', 'i', 'n', ' ', 't', 'h', 'e', ' ', 'M', 'C', 'U', '.', ' ', 'H', 'e', ' ', 'd', 'o', 'e', 's', ' ', 's', 'o', 'm', 'e', ' ', 'p', 'r', 'e', 't', 't', 'y', ' ', 'e', 'p', 'i', 'c', ' ', 's', 'h', '*', 't', ' ', 'i', 'n', ' ', 't', 'h', 'i', 's', ' ', 'm', 'o', 'v', 'i', 'e', ' ', 'a', 'n', 'd', ' ', 'h', 'e', ' ', 'i', 's', ' ', 'd', 'e', 'f', 'i', 'n', 'i', 't', 'e', 'l', 'y', ' ', 'n', 'o', 't', ' ', 'u', 'n', 'd', 'e', 'r', '-', 'p', 'o', 'w', 'e', 'r', 'e', 'd', ' ', 'a', 'n', 'y', 'm', 'o', 'r', 'e', '.', ' ', 'T', 'h', 'o', 'r', ' ', 'i', 'n', ' ', 'u', 'n', 'l', 'e', 'a', 's', 'h', 'e', 'd', ' ', 'i', 'n', ' ', 't', 'h', 'i', 's', ',', ' ', 'I', ' ', 'l', 'o', 'v', 'e', ' ', 't', 'h', 'a', 't', '.']
This result shows how our simple Python function has converted text into tokens.

Converting text into words

We will use the split function available on the Python string object to break the text into words. The split function takes an optional separator argument, based on which it splits the text into tokens; when no separator is passed, it splits on whitespace, which is what we want for our example. The following code block demonstrates how we can convert text into words using the Python split function:
print(thor_review.split())

#Results

['the', 'action', 'scenes', 'were', 'top', 'notch', 'in', 'this', 'movie.', 'Thor', 'has', 'never', 'been', 'this', 'epic', 'in', 'the', 'MCU.', 'He', 'does', 'some', 'pretty', 'epic', 'sh*t', 'in', 'this', 'movie', 'and', 'he', 'is', 'definitely', 'not', 'under-powered', 'anymore.', 'Thor', 'in', 'unleashed', 'in', 'this,', 'I', 'love', 'that.']
In the preceding code, we did not pass any separator; by default, the split function splits on whitespace.
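Notice in the output that punctuation remains attached to the neighboring word, as in 'movie.' and 'this,'. One simple way to handle this, shown here as our own illustration rather than the book's approach, is to strip punctuation from each token:

import string

# Strip leading and trailing punctuation from each whitespace-separated
# token of the thor_review string defined earlier.
words = [token.strip(string.punctuation) for token in thor_review.split()]
print(words)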

N-gram representation

We have seen how text can be represented as characters and words. Sometimes it is useful to look at two, three, or more words together. N-grams are groups of words extracted from a given text; the n in an n-gram represents the number of words grouped together. Let's look at an example of what a bigram (n=2) looks like. We use the Python nltk package to generate bigrams for thor_review. The following code block shows the bigram result and the code used to generate it:
from nltk import ngrams

print(list(ngrams(thor_review.split(), 2)))

#Results
[('the', 'action'), ('action', 'scenes'), ('scenes', 'were'), ('were', 'top'), ('top', 'notch'), ('notch', 'in'), ('in', 'this'), ('this', 'movie.'), ('movie.', 'Thor'), ('Thor', 'has'), ('has', 'never'), ('never', 'been'), ('been', 'this'), ('this', 'epic'), ('epic', 'in'), ('in', 'the'), ('the', 'MCU.'), ('MCU.', 'He'), ('He', 'does'), ('does', 'some'), ('some', 'pretty'), ('pretty', 'epic'), ('e...
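A trigram works the same way; as a quick follow-up of our own (not in the original excerpt), passing n=3 groups three consecutive words:

# Trigrams: the same ngrams call with n=3.
print(list(ngrams(thor_review.split(), 3)))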
