Mastering Machine Learning Algorithms

Expert techniques to implement popular machine learning algorithms and fine-tune your models

Giuseppe Bonaccorso

  1. 576 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS and Android

Book Information

Explore and master the most important algorithms for solving complex machine learning problems.

Key Features

  • Discover high-performing machine learning algorithms and understand how they work in depth
  • A one-stop solution to mastering supervised, unsupervised, and semi-supervised machine learning algorithms and their implementation
  • Master concepts related to algorithm tuning, parameter optimization, and more

Book Description

Machine learning is a subset of AI that aims to make modern-day computer systems smarter and more intelligent. The real power of machine learning resides in its algorithms, which enable machines to handle even the most difficult problems. However, as technology advances and data requirements grow, machines will have to become smarter than they are today to meet the overwhelming demands of data; mastering these algorithms and using them optimally has therefore become essential.

Mastering Machine Learning Algorithms is your complete guide to quickly getting to grips with popular machine learning algorithms. You will be introduced to the most widely used algorithms in supervised, unsupervised, and semi-supervised machine learning, and will learn how to use them in the best possible manner. Ranging from Bayesian models to the MCMC algorithm to Hidden Markov models, this book will teach you how to extract features from your dataset and perform dimensionality reduction by making use of Python-based libraries such as scikit-learn. You will also learn how to use Keras and TensorFlow to train effective neural networks.

If you are looking for a single resource to study, implement, and solve end-to-end machine learning problems and use cases, this is the book you need.

What you will learn

  • Explore how an ML model can be trained, optimized, and evaluated
  • Understand how to create and learn static and dynamic probabilistic models
  • Successfully cluster high-dimensional data and evaluate model accuracy
  • Discover how artificial neural networks work and how to train, optimize, and validate them
  • Work with Autoencoders and Generative Adversarial Networks
  • Apply label spreading and propagation to large datasets
  • Explore the most important Reinforcement Learning techniques

Who this book is for

This book is an ideal resource for data science professionals who want to delve into complex machine learning algorithms, calibrate models, and improve the predictions of their trained models. A basic knowledge of machine learning is preferred to get the best out of this guide.


Information

Year: 2018
ISBN: 9781788625906
Edition: 1

EM Algorithm and Applications

In this chapter, we are going to introduce a very important algorithmic framework for many statistical learning tasks: the EM algorithm. Despite its name, this is not a method for solving a single problem, but a methodology that can be applied in several contexts. Our goal is to explain the rationale and show the mathematical derivation, together with some practical examples. In particular, we are going to discuss the following topics:
  • Maximum Likelihood Estimation (MLE) and Maximum A Posteriori (MAP) learning approaches
  • The EM algorithm with a simple application for the estimation of unknown parameters
  • The Gaussian mixture algorithm, which is one of the most famous EM applications
  • Factor analysis
  • Principal Component Analysis (PCA)
  • Independent Component Analysis (ICA)
  • A brief explanation of the forward-backward algorithm for Hidden Markov Models (HMMs) in terms of the EM steps

MLE and MAP learning

Let's suppose we have a data generating process pdata, used to draw a dataset X:

    X = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_N\} \quad \text{where} \quad \bar{x}_i \sim p_{data}

In many statistical learning tasks, our goal is to find the optimal parameter set θ according to a maximization criterion. The most common approach is based on the likelihood and is called MLE. In this case, the optimal set θ is found as follows:

    \theta_{MLE} = \underset{\theta}{\operatorname{argmax}} \; p(X|\theta) = \underset{\theta}{\operatorname{argmax}} \prod_{i=1}^{N} p(\bar{x}_i|\theta)

This approach has the advantage of not being biased by wrong preconditions, but, at the same time, it excludes any possibility of incorporating prior knowledge into the model. It simply searches the parameter space for the θ that maximizes p(X|θ), so, even though the estimator is almost unbiased, there is a higher probability of finding a sub-optimal solution that can be quite different from a reasonable (even if not certain) prior. On the other hand, several models are too complex to allow us to define a suitable prior probability (think, for example, of reinforcement learning strategies, where there's a huge number of complex states); in those cases, MLE offers the most reliable solution. Moreover, it's possible to prove that the MLE of a parameter θ converges in probability to the real value:

    \hat{\theta}_{MLE} \xrightarrow{p} \theta^{*}, \quad \text{that is,} \quad \lim_{N \to \infty} P\left(\left|\hat{\theta}_{MLE} - \theta^{*}\right| > \varepsilon\right) = 0 \;\; \forall \varepsilon > 0
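
The following is a minimal sketch of MLE as a numerical optimization (not from the book; the Gaussian data, the parameterization, and the optimizer choice are all illustrative assumptions). We recover the mean and standard deviation of normally distributed samples by minimizing the negative log-likelihood with SciPy:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical dataset drawn from N(1.0, 2.0)
    np.random.seed(1000)
    X = np.random.normal(loc=1.0, scale=2.0, size=1000)

    def negative_log_likelihood(theta, data):
        # theta = (mu, log_sigma); optimizing log(sigma) keeps
        # sigma positive without explicit constraints
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        log_pdf = -0.5 * np.log(2.0 * np.pi) - log_sigma \
                  - 0.5 * ((data - mu) / sigma) ** 2
        return -np.sum(log_pdf)

    result = minimize(negative_log_likelihood, x0=np.array([0.0, 0.0]),
                      args=(X,), method='L-BFGS-B')
    mu_mle, sigma_mle = result.x[0], np.exp(result.x[1])
    print('MLE: mu=%.3f, sigma=%.3f' % (mu_mle, sigma_mle))

With 1,000 samples, the estimates land close to the true parameters (1.0 and 2.0), illustrating the convergence property stated above.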
On the other hand, if we consider Bayes' theorem, we can derive the following relation:

    p(\theta|X) = \frac{p(X|\theta)\,p(\theta)}{p(X)} \propto p(X|\theta)\,p(\theta)
The posterior probability, p(θ|X), is obtained using both the likelihood and a prior probability, p(θ), and hence takes into account existing knowledge encoded in p(θ). The choice to maximize p(θ|X) is called the MAP approach, and it's often a good alternative to MLE when it's possible to formulate trustworthy priors or when, as in the case of Latent Dirichlet Allocation (LDA), the model is deliberately based on some specific prior assumptions.
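
As a sketch of how the prior enters the optimization (assuming the same hypothetical Gaussian data as above, plus a deliberately narrow N(0, 0.1) prior on the mean to exaggerate the effect), MAP simply adds the log-prior to the log-likelihood before maximizing:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    np.random.seed(1000)
    X = np.random.normal(loc=1.0, scale=2.0, size=1000)

    def negative_log_posterior(theta, data):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        log_likelihood = np.sum(norm.logpdf(data, loc=mu, scale=sigma))
        # Hypothetical prior mu ~ N(0, 0.1): a strong belief that mu is near 0
        log_prior = norm.logpdf(mu, loc=0.0, scale=0.1)
        return -(log_likelihood + log_prior)

    result = minimize(negative_log_posterior, x0=np.array([0.0, 0.0]),
                      args=(X,), method='L-BFGS-B')
    print('MAP: mu=%.3f' % result.x[0])

Because this prior is wrong (the data were generated with mu=1.0), the MAP estimate of the mean is pulled noticeably toward 0, which is exactly the kind of bias discussed next.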
Unfortunately, a wrong or incomplete prior distribution can bias the model, leading to unacceptable results. For this reason, MLE is often the default choice even when it's possible to formulate reasonable assumptions about the structure of p(θ). To understand the impact of a prior on an estimation, let's suppose we have observed n=1000 binomially distributed experiments (θ corresponds to the success probability p) and that k=800 of them had a successful outcome. The likelihood is as follows:

    L(\theta|X) = p(X|\theta) = \binom{n}{k} \, \theta^{k} (1-\theta)^{n-k}
For simplicity, let's compute the log-likelihood:

    \log L(\theta|X) = \log \binom{n}{k} + k \log \theta + (n-k) \log(1-\theta)
If we compute the derivative with respect to θ and set it equal to zero, we get the following:

    \frac{d \log L(\theta|X)}{d\theta} = \frac{k}{\theta} - \frac{n-k}{1-\theta} = 0 \;\Rightarrow\; \theta_{MLE} = \frac{k}{n} = \frac{800}{1000} = 0.8
So the MLE for θ is 0.8, which is consistent with the observations: after observing 1000 experiments with 800 successful outcomes, the estimated probability of success is 0.8. If we have only the data X, we can say that a success is more likely than a failure, because 800 out of 1000 experiments were positive.
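
The closed-form result is easy to verify numerically. The following sketch also adds a hypothetical conjugate Beta(a, b) prior (the parameter values are invented for illustration) to preview how a MAP estimate would shift the result:

    # Closed-form MLE for the binomial example
    n, k = 1000, 800
    theta_mle = k / n
    print('MLE estimate: %.4f' % theta_mle)  # 0.8000

    # With a conjugate Beta(a, b) prior, the posterior is
    # Beta(a + k, b + n - k), whose mode is the MAP estimate
    # (valid for a, b > 1)
    a, b = 5.0, 15.0  # hypothetical prior biased toward low success rates
    theta_map = (a + k - 1.0) / (a + b + n - 2.0)
    print('MAP estimate: %.4f' % theta_map)  # ~0.7898, pulled down by the prior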
However, ...
