Mastering Machine Learning Algorithms

Giuseppe Bonaccorso

eBook - ePub

Expert techniques for implementing popular machine learning algorithms, fine-tuning your models, and understanding how they work, 2nd Edition

  • 798 pages
  • English
  • ePUB (mobile friendly)
  • Available on iOS & Android

About this book

Updated and revised second edition of the bestselling guide to exploring and mastering the most important algorithms for solving complex machine learning problems

Key Features

  • Updated to include new algorithms and techniques
  • Code updated to Python 3.8 & TensorFlow 2.x
  • New coverage of regression analysis, time series analysis, deep learning models, and cutting-edge applications

Book Description

Mastering Machine Learning Algorithms, Second Edition helps you harness the real power of machine learning algorithms in order to implement smarter ways of meeting today's overwhelming data needs. This newly updated and revised guide will help you master algorithms used widely in semi-supervised learning, reinforcement learning, supervised learning, and unsupervised learning domains.

You will use all the modern libraries from the Python ecosystem, including NumPy and Keras, to extract features from data of varying complexity. Covering topics that range from Bayesian models to the Markov chain Monte Carlo algorithm and Hidden Markov Models, this machine learning book teaches you how to extract features from your dataset, perform complex dimensionality reduction, and train supervised and semi-supervised models by making use of Python-based libraries such as scikit-learn. You will also discover practical applications for complex techniques such as maximum likelihood estimation, Hebbian learning, and ensemble learning, and how to use TensorFlow 2.x to train effective deep neural networks.

By the end of this book, you will be ready to implement and solve end-to-end machine learning problems and use case scenarios.

What you will learn

  • Understand the characteristics of a machine learning algorithm
  • Implement algorithms from supervised, semi-supervised, unsupervised, and RL domains
  • Learn how regression works in time-series analysis and risk prediction
  • Create, model, and train complex probabilistic models
  • Cluster high-dimensional data and evaluate model accuracy
  • Discover how artificial neural networks work: train, optimize, and validate them
  • Work with autoencoders, Hebbian networks, and GANs

Who this book is for

This book is for data science professionals who want to delve into complex ML algorithms to understand how various machine learning models can be built. Knowledge of Python programming is required.


11

Bayesian Networks and Hidden Markov Models

In this chapter, we're going to introduce the basic concepts of Bayesian models, which allow us to work with several scenarios where it's necessary to consider uncertainty as a structural part of the system. The discussion will cover both static (time-invariant) methods and dynamic ones, which can be employed, where necessary, to model time sequences.
In particular, the chapter covers the following topics:
  • Bayes' theorem and its applications
  • Bayesian networks
  • Sampling from a Bayesian network:
    • Markov chain Monte Carlo (MCMC), Gibbs, and Metropolis-Hastings
  • Modeling a Bayesian network with PyMC3 and PyStan
  • Hidden Markov Models (HMMs)
  • Examples with the library hmmlearn
Before discussing more advanced topics, we need to introduce the basic concept of Bayesian statistics with a focus on all those aspects that are exploited by the algorithms discussed in the chapter.

Conditional probabilities and Bayes' theorem

If we have a probability space (Ω, F, P) and two events A and B (with P(B) > 0), the probability of A given B is called the conditional probability, and it's defined as:

P(A|B) = P(A, B) / P(B)

As the joint probability is commutative, that is, P(A, B) = P(B, A), it's possible to derive Bayes' theorem:

P(A|B) = P(B|A)P(A) / P(B)
This theorem allows us to express a conditional probability as a function of the opposite one and the two marginal probabilities P(A) and P(B). This result is fundamental to many machine learning problems because, as we're going to see in this and the following chapters, it's normally easier to work with one conditional probability (for example, P(B|A)) in order to obtain the opposite one (that is, P(A|B)), which is often hard to estimate directly. A common form of this theorem can be expressed as:

P(A|B) ∝ P(B|A)P(A)
Let's suppose that we need to estimate the probability of an event A given some observations B, or, using the standard notation, the posterior probability of A. The previous formula expresses this value as proportional to the product of P(A), which is the marginal probability of A, called the prior probability, and the conditional probability of the observations B given the event A. P(B|A) is called the likelihood, and it expresses how likely the event A is to produce the observations B. Therefore, we can summarize the relation as:

Posterior probability ∝ Likelihood × Prior probability
The proportionality is not a limitation, because the term P(B) is just a normalizing constant that can be omitted during the computation. Of course, the reader must remember to normalize the product P(B|A)P(A) so that the posterior probabilities always sum up to one. This is a key concept of Bayesian statistics: we don't blindly trust the prior probability, but we reweight it using the likelihood of our observations. To achieve this goal, we need to choose a prior probability, which represents the initial knowledge (that is, before observing the data).
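
To make the update mechanism concrete, the following is a minimal NumPy sketch (the coin-bias scenario and all numbers are purely illustrative, not taken from the book) that reweights a prior with the likelihood of some observations and normalizes the result so that the posterior sums up to one:

    import numpy as np

    # Hypothetical scenario: a coin is either fair (P(head) = 0.5) or biased (P(head) = 0.8)
    p_head = np.array([0.5, 0.8])      # likelihood parameter under each hypothesis A
    prior = np.array([0.7, 0.3])       # prior probability P(A) of each hypothesis

    # Observations B: 8 heads out of 10 tosses
    heads, tosses = 8, 10
    likelihood = p_head ** heads * (1.0 - p_head) ** (tosses - heads)   # P(B|A)

    # Bayes' theorem: posterior ∝ likelihood × prior, then normalize so the terms sum to 1
    unnormalized = likelihood * prior
    posterior = unnormalized / unnormalized.sum()

    print(posterior)   # most of the probability mass has moved to the biased hypothesis

Note that the binomial coefficient is the same for both hypotheses, so it cancels out during normalization and can be omitted, exactly as P(B) can.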
This stage is very important and can lead to very different results depending on the chosen family of priors. If the domain knowledge is consolidated, a precise prior distribution allows us to achieve a more accurate posterior distribution. Conversely, if the prior knowledge is limited, it's generally preferable to avoid specific distributions and, instead, default to so-called low- or non-informative priors.
In general, distributions that concentrate the probability in a restricted region are very informative and their entropy is low, because the uncertainty is capped by the small variance. For example, if we impose a prior Gaussian distribution N(1.0, 0.01), we expect the posterior to be very peaked around the mean. In this case, the likelihood term has a limited ability to change the prior belief, unless the sample size is extremely large. Conversely, if we only know that the posterior mean can be found in the range (0.5, 1.5) but we are not sure about the true value, it's preferable to employ a distribution with a larger variance, whose higher entropy leaves the likelihood free to move the posterior toward the values supported by the data.
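
The effect of the prior's variance can be checked with a short sketch. The snippet below assumes a conjugate Gaussian-Gaussian model with known data variance (an assumption made here only for illustration; it is not part of the excerpt above) and compares the informative prior N(1.0, 0.01) with a weaker, hypothetical prior N(1.0, 1.0) on the same synthetic data:

    import numpy as np

    def posterior_normal(prior_mean, prior_var, data, data_var):
        # Closed-form posterior for a Gaussian prior and a Gaussian likelihood
        # with known variance (conjugate update)
        n, sample_mean = len(data), np.mean(data)
        post_var = 1.0 / (1.0 / prior_var + n / data_var)
        post_mean = post_var * (prior_mean / prior_var + n * sample_mean / data_var)
        return post_mean, post_var

    rng = np.random.default_rng(0)
    data = rng.normal(1.4, 0.5, size=10)    # hypothetical observations centered at 1.4

    # Informative prior N(1.0, 0.01): the posterior mean is pulled only partially
    # away from the prior mean of 1.0
    print(posterior_normal(1.0, 0.01, data, 0.25))

    # Weaker prior N(1.0, 1.0): the likelihood dominates and the posterior mean
    # ends up close to the sample mean
    print(posterior_normal(1.0, 1.0, data, 0.25))

With only a handful of observations, the informative prior still dominates the estimate, while the weaker prior lets the data drive the posterior, as discussed above.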

Table of contents

  1. Why subscribe?
  2. Preface
  3. Machine Learning Model Fundamentals
  4. Loss Functions and Regularization
  5. Introduction to Semi-Supervised Learning
  6. Advanced Semi-Supervised Classification
  7. Graph-Based Semi-Supervised Learning
  8. Clustering and Unsupervised Models
  9. Advanced Clustering and Unsupervised Models
  10. Clustering and Unsupervised Models for Marketing
  11. Generalized Linear Models and Regression
  12. Introduction to Time-Series Analysis
  13. Bayesian Networks and Hidden Markov Models
  14. The EM Algorithm
  15. Component Analysis and Dimensionality Reduction
  16. Hebbian Learning
  17. Fundamentals of Ensemble Learning
  18. Advanced Boosting Algorithms
  19. Modeling Neural Networks
  20. Optimizing Neural Networks
  21. Deep Convolutional Networks
  22. Recurrent Neural Networks
  23. Autoencoders
  24. Introduction to Generative Adversarial Networks
  25. Deep Belief Networks
  26. Introduction to Reinforcement Learning
  27. Advanced Policy Estimation Algorithms
  28. Other Books You May Enjoy