R Deep Learning Projects

Yuxi (Hayden) Liu, Pablo Maldonado
5 real-world projects to help you master deep learning concepts

Key Features

  • Master the different deep learning paradigms and build real-world projects related to text generation, sentiment analysis, fraud detection, and more
  • Get to grips with R's impressive range of deep learning libraries and frameworks, such as deepnet, MXNetR, TensorFlow, H2O, Keras, and text2vec
  • Practical projects that show you how to implement different neural networks with helpful tips, tricks, and best practices

Book Description

R is a popular programming language used by statisticians and mathematicians for statistical analysis, and it is increasingly used for deep learning. Deep learning is one of the most prominent topics in machine learning today, and it is finding practical applications in many domains.

This book demonstrates end-to-end implementations of five real-world projects on popular topics in deep learning such as handwritten digit recognition, traffic light detection, fraud detection, text generation, and sentiment analysis. You'll learn how to train effective neural networks in R—including convolutional neural networks, recurrent neural networks, and LSTMs—and apply them in practical scenarios. The book also highlights how neural networks can be trained using GPU capabilities. You will use popular R libraries and packages—such as MXNetR, H2O, deepnet, and more—to implement the projects.

By the end of this book, you will have a better understanding of deep learning concepts and techniques and how to use them in a practical setting.

What you will learn

  • Implement deep learning models with packages such as deepnet, MXNetR, TensorFlow, H2O, Keras, and text2vec
  • Apply neural networks to perform handwritten digit recognition using MXNet
  • Get the knack of CNN models, Neural Network API, Keras, and TensorFlow for traffic sign classification
  • Implement credit card fraud detection with Autoencoders
  • Master reconstructing images using variational autoencoders
  • Wade through sentiment analysis from movie reviews
  • Run from past to future and vice versa with bidirectional Long Short-Term Memory (LSTM) networks
  • Understand the applications of Autoencoder Neural Networks in clustering and dimensionality reduction

Who this book is for

Machine learning professionals and data scientists looking to master deep learning by implementing practical projects in R will find this book a useful resource. A knowledge of R programming and the basic concepts of deep learning is required to get the best out of this book.


Information

Year: 2018
ISBN: 9781788474559

Text Generation Using Recurrent Neural Networks

In this chapter, we will describe some of the most exciting techniques in modern (at the time of writing, late 2017) machine learning: recurrent neural networks. They are, however, not new; they have been around since the 1980s, but they have become popular due to the numerous records they have set in language-related tasks in recent years.
Why do we need a different type of architecture for text? Consider the following example:
"I live in Prague since 2015"
and
"Since 2015 I live in Prague"
If we wanted to teach a traditional feed-forward network, such as a perceptron or a multi-layer perceptron, to identify the date I moved to Prague, then this network would have to learn a separate set of parameters for each input position, so the same date appearing earlier or later in the sentence would look like a brand-new pattern; in particular, the network would have to learn grammar to answer this simple question! This is undesirable in many applications. Similar issues motivated machine learning researchers and statisticians in the 1980s to introduce the idea of sharing parameters across different parts of the model. This idea is the secret sauce of recurrent neural networks, our next deep learning architecture.
By design, recurrent neural networks are well-suited for processing sequential data. In general, machine learning applied to sequential data can be roughly divided into four main areas:
  • Sequence prediction: Given x_1, x_2, ..., x_t, predict the next element of the sequence, x_(t+1)
  • Sequence classification: Given x_1, x_2, ..., x_t, predict a category or label for it
  • Sequence generation: Given x_1, x_2, ..., x_t, generate a new element of the sequence, x_(t+1)
  • Sequence to sequence prediction: Given x_1, x_2, ..., x_t, generate an equivalent sequence, y_1, y_2, ..., y_t
Applications of sequence prediction include weather forecasting and stock market prediction. For classification, we can think, for example, of sentiment analysis and document classification. Automatic image captioning or text generation are part of the sequence generation family of problems, whereas machine translation might be the most familiar example of sequence to sequence prediction we see in our everyday lives.
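To make the first of these tasks concrete, here is a minimal sketch in R (not from the book) of how a raw sequence can be reframed as a supervised learning problem: each row of inputs is a window of past values, and the target is the element that follows it. The toy sine series and the window size are illustrative assumptions.

series <- sin(seq(0, 10, by = 0.1))   # a toy sequence x_1, ..., x_T
window <- 5                           # how many past values we look at

# One row per window: columns are x_i, ..., x_(i+window-1); the target is x_(i+window)
n <- length(series) - window
X <- t(sapply(seq_len(n), function(i) series[i:(i + window - 1)]))
y <- series[(window + 1):length(series)]

dim(X)   # n rows, one column per past value
head(y)  # the next element for each window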
Our focus for this chapter is on applications of recurrent neural networks for text generation. Since, as we saw previously, text generation is part of a much larger set of problems, many of our algorithms are portable to other contexts.
Training deep learning models is often time-consuming, and recurrent neural networks are no exception. Our focus is on the ideas rather than on the data, so we will illustrate them with smaller datasets than those you might encounter later in the wild. This is for the sake of clarity: we want to make it easier for you to get started on any standard laptop. Once you grasp the basics, you can spin up your own cluster with your favorite cloud provider.

What is so exciting about recurrent neural networks?

Coming from a mathematics background, in my rather hectic career I have seen many different trends, particularly during the last few years, and they all sound very similar to me: "You have a problem? Wavelets can save you!", "Finite elements are the solution to everything", and similar over-enthusiastic claims.
Of course, each tool has its time and place and, more importantly, an application domain where it excels. I find recurrent neural networks quite interesting for the many things they can achieve:
  • Produce consistent markup text (opening and closing tags, recognizing timestamp-like data)
  • Write Wikipedia articles with references, and invent URLs for non-existent addresses by learning what a URL should look like
  • Create credible-looking scientific papers from LaTeX
All these amazing feats are possible without the network having any context information or metadata; in particular, without knowing English, or what a URL or a snippet of LaTeX syntax looks like.
These and even more interesting capabilities of neural networks are superbly described by Andrej Karpathy in The Unreasonable Effectiveness of Recurrent Neural Networks: http://karpathy.github.io/2015/05/21/rnn-effectiveness/.
What makes recurrent neural networks exciting? Instead of being constrained to a fixed input size and a fixed output size, we can operate over sequences of vectors.
A limitation of many machine learning algorithms, including standard feed-forward neural networks, is that they accept a fixed-size vector as input and produce a fixed-size vector as output. For instance, if we want to classify text, we receive a corpus of documents, create a vocabulary from it to vectorize each document, and the output is a vector of class probabilities. Recurrent neural networks instead allow us to take sequences of vectors as input. So, from a one-to-one correspondence between fixed input size and fixed output size, we move to a much richer landscape: one-to-one, one-to-many, many-to-one, and many-to-many.
Why is that desirable? Let's look at a few examples:
  • One-to-one: Supervised learning, for instance, text classification
  • One-to-many: Given an input text, generate a summary (a sequence of words with important information)
  • Many-to-one: Sentiment analysis in text (see the sketch after this list)
  • Many-to-many: Machine translation
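As an illustration of the many-to-one case, here is a minimal sketch using the keras package for R, one of the frameworks used in this book. The vocabulary size and layer sizes are illustrative assumptions, not values from the book.

library(keras)

# A sequence of word indices goes in; a single sentiment probability comes out
model <- keras_model_sequential() %>%
  layer_embedding(input_dim = 10000, output_dim = 32) %>%  # map each word index to a 32-dimensional vector
  layer_simple_rnn(units = 32) %>%                         # fold the whole sequence into one hidden state
  layer_dense(units = 1, activation = "sigmoid")           # one output: the probability of a positive label

model %>% compile(
  optimizer = "rmsprop",
  loss      = "binary_crossentropy",
  metrics   = "accuracy"
)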
Moreover, as recurrent neural networks maintain an internal state that gets updated as new information arrives, we can view an RNN as a description of a program. In fact, a 1995 paper by Siegelmann shows that recurrent neural networks are Turing complete: they can simulate arbitrary programs.
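To see what maintaining an internal state means in practice, here is a plain-R sketch (not from the book) of a single recurrent step. Note that the same weights W, U, and b are reused at every time step; this is exactly the parameter sharing mentioned earlier. The dimensions are illustrative assumptions.

# One recurrent step: the new hidden state depends on the current input
# and on the previous hidden state, through weights shared across time
rnn_step <- function(x, h, W, U, b) {
  tanh(W %*% x + U %*% h + b)
}

set.seed(42)
input_dim  <- 3
hidden_dim <- 4
W <- matrix(rnorm(hidden_dim * input_dim),  hidden_dim, input_dim)
U <- matrix(rnorm(hidden_dim * hidden_dim), hidden_dim, hidden_dim)
b <- rep(0, hidden_dim)

h  <- rep(0, hidden_dim)                                # the initial state knows nothing
xs <- replicate(5, rnorm(input_dim), simplify = FALSE)  # a toy sequence of five input vectors
for (x in xs) h <- rnn_step(x, h, W, U, b)              # h now summarizes the whole sequence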

But what is a recurrent neural network, really?

How does the network keep track of the previous states? To put it in the context of text generation, think of our training data as a list of character sequences (tokenized words). For each word, starting from the first character, we will predict the character that follows at every position.
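As a minimal sketch of this setup in R, here is how the (input, target) pairs look for a single word; the example word is illustrative, not taken from the book's dataset.

word  <- "prague"
chars <- strsplit(word, "")[[1]]

# The prefix of characters up to position t is the input; character t+1 is the target
pairs <- lapply(seq_len(length(chars) - 1), function(t) {
  list(input  = paste(chars[1:t], collapse = ""),
       target = chars[t + 1])
})

# e.g. "p" -> "r", "pr" -> "a", "pra" -> "g", ...
str(pairs[[1]])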
Formally, let's denote a sequen...
