Machine Learning Algorithms

Popular algorithms for data science and machine learning, 2nd Edition

Giuseppe Bonaccorso

  • 522 pages
  • English
  • ePUB (mobile friendly)

About This Book

An easy-to-follow, step-by-step guide for getting to grips with the real-world application of machine learning algorithms

Key Features

  • Explore statistics and complex mathematics for data-intensive applications
  • Discover new developments in the EM algorithm, PCA, and Bayesian regression
  • Study patterns and make predictions across various datasets

Book Description

Machine learning has gained tremendous popularity for its powerful and fast predictions on large datasets. However, the true force behind its output is a set of complex algorithms, grounded in statistical analysis, that churn through large datasets and generate substantial insight.

This second edition of Machine Learning Algorithms walks you through the most significant recent developments in machine learning algorithms, helping you strengthen and master statistical interpretation across the areas of supervised, semi-supervised, and reinforcement learning. Once the core concepts of an algorithm have been covered, you'll explore real-world examples based on the most widely used libraries, such as scikit-learn, NLTK, TensorFlow, and Keras. You will discover new topics such as principal component analysis (PCA), independent component analysis (ICA), Bayesian regression, discriminant analysis, advanced clustering, and Gaussian mixtures.

By the end of this book, you will have studied machine learning algorithms and be able to put them into production to make your machine learning applications more innovative.

What you will learn

  • Study feature selection and the feature engineering process
  • Assess performance and error trade-offs for linear regression
  • Build a data model and understand how it works by using different types of algorithms
  • Learn to tune the parameters of Support Vector Machines (SVM)
  • Explore the concept of natural language processing (NLP) and recommendation systems
  • Create a machine learning architecture from scratch

Who this book is for

Machine Learning Algorithms is for you if you are a machine learning engineer, data engineer, or junior data scientist who wants to advance in the field of predictive analytics and machine learning. Familiarity with R and Python will be an added advantage for getting the best from this book.


Information

Year: 2018
ISBN: 9781789345483

Linear Classification Algorithms

This chapter begins by analyzing linear classification problems, with a particular focus on logistic regression (despite its name, it's a classification algorithm) and the stochastic gradient descent (SGD) approach. Even though these strategies may appear too simple, they're still the main choices in many classification tasks.
In this regard, it's useful to remember a very important philosophical principle: Occam's razor.
In our context, it states that the first choice must always be the simplest, and only if it doesn't fit should we move on to more complex models. In the second part of the chapter, we're going to discuss some common metrics that are helpful when evaluating a classification task. They are not limited to linear models, so we'll use them when talking about different strategies as well.
In particular, we are going to discuss the following:
  • The general structure of a linear classification problem
  • Logistic regression (with and without regularization)
  • SGD algorithms and perceptron
  • Passive-aggressive algorithms
  • Grid search of optimal hyperparameters
  • The most important classification metrics
  • The Receiver Operating Characteristic (ROC) curve

Linear classification

Let's consider a generic linear classification problem with two classes. In the following graph, there's an example:
[Figure: Bidimensional scenario for a linear classification problem]
Our goal is to find an optimal hyperplane that separates the two classes. In multi-class problems, the one-vs-all strategy is normally adopted, so the discussion can focus only on binary classifications. Suppose we have the following dataset made up of n m-dimensional samples:

X = {x₁, x₂, ..., xₙ}, xᵢ ∈ ℝᵐ

This dataset is associated with the following target set:

Y = {y₁, y₂, ..., yₙ}, yᵢ ∈ {0, 1} or yᵢ ∈ {-1, 1}

Generally, there are two equivalent options: binary outputs (yᵢ ∈ {0, 1}) and bipolar outputs (yᵢ ∈ {-1, 1}); different algorithms are based on the former or the latter without any substantial difference. Normally, the choice is made to simplify the computation and has no impact on the results.
We can now define a weight vector made of m continuous components:

w = (w₁, w₂, ..., wₘ)

We can also define the quantity z:

z = w · x + b

If x is a variable, z is the value determined by the hyperplane equation. Therefore, in a bipolar scenario, if the set of coefficients w that has been determined is correct, the following happens:

z = w · xᵢ + b > 0 when yᵢ = +1
z = w · xᵢ + b < 0 when yᵢ = -1

When working with binary outputs, the decision is normally made according to a threshold. For example, if the output z ∈ (0, 1), the previous condition becomes the following:

yᵢ = 1 if z > 0.5, otherwise yᵢ = 0
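To make these decision rules concrete, here is a minimal NumPy sketch with hand-picked, hypothetical weights (a real model would learn them from data), applying both the bipolar sign rule and the binary 0.5 threshold:

```python
import numpy as np

# Hypothetical hyperplane parameters (not learned from data)
w = np.array([1.5, -0.8])   # weight vector, m = 2
b = -0.2                    # intercept

def classify_bipolar(x):
    """Return +1 or -1 according to the sign of z = w . x + b."""
    z = np.dot(w, x) + b
    return 1 if z > 0 else -1

def classify_binary(z):
    """Threshold a z value that is already bounded in (0, 1)."""
    return 1 if z > 0.5 else 0

print(classify_bipolar(np.array([1.0, 0.5])))    # z = 0.9  -> +1
print(classify_bipolar(np.array([-1.0, 1.0])))   # z = -2.5 -> -1
print(classify_binary(0.7))                      # above the threshold -> 1
```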
Now, we must find a way to optimize w to reduce the classification error. If such a combination exists (with a certain error threshold), we say that our problem is linearly separable. On the other hand, when it's impossible to find a linear classifier, the problem is defined as non-linearly separable.
A very simple but famous example belonging to the second class is given by the XOR logical operator:
[Figure: Schema representing the non-linearly separable problem of binary XOR]
As you can see, any separating line will always misclassify at least one sample. Hence, to solve this problem, it is necessary to employ non-linear techniques based on higher-order curves (for example, two parabolas). However, in many real-life cases, it's possible to use linear techniques (which are often simpler and faster) for non-linear problems too, provided that a tolerable misclassification error is acceptable.
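As a quick numerical check of this claim, the following sketch (assuming scikit-learn is available, as it's one of the libraries used throughout the book) fits a linear Perceptron classifier on the four XOR points; since no line can separate them, training accuracy can never exceed 0.75:

```python
import numpy as np
from sklearn.linear_model import Perceptron

# The four XOR points and their labels
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# A purely linear classifier: it can never fit XOR exactly
clf = Perceptron(max_iter=1000, tol=None, random_state=1)
clf.fit(X, y)

# At most 3 of the 4 points can fall on the correct side of any line,
# so the training score is bounded by 0.75
print(clf.score(X, y))
```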

Logistic regression

Even though it's called regression, this is a classification method that is based on the probability of a sample belonging to a class. As our probabilities must be continuous and bounded in (0, 1), it's necessary to introduce a threshold function to filter the term z. As already done with linear regression, we can get rid of the extra parameter corresponding to the intercept by adding a 1 element at the end of each input vector:

xᵢ → (xᵢ₁, xᵢ₂, ..., xᵢₘ, 1)

In this way, we can consider a single parameter vector θ, containing m + 1 elements, and compute the z-value with a dot product:

z = θ · xᵢ
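As a small sketch of this trick (the numbers are hypothetical), appending the constant 1 lets a single dot product replace the separate weight and intercept computation:

```python
import numpy as np

# Hypothetical parameter vector: m = 2 weights followed by the intercept
theta = np.array([1.5, -0.8, -0.2])

x = np.array([1.0, 0.5])
x_aug = np.append(x, 1.0)   # add the 1 element at the end of the input vector

z = np.dot(theta, x_aug)    # equivalent to w . x + b in a single product
print(z)                    # 1.5*1.0 - 0.8*0.5 - 0.2*1.0 = 0.9
```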
Now, let's suppose we introduce the probability p(xᵢ) that an element belongs to class 1. Clearly, the same element belongs to class 0 with probability 1 - p(xᵢ). Logistic regression is mainly based on the idea of modeling the odds of belonging to class 1 using an exponential function:

odds(p(xᵢ)) = p(xᵢ) / (1 - p(xᵢ)) = eᶻ

This function is continuous and differentiable on ℝ, always positive, and tends to infinity as the argument x → ∞. These conditions are necessary to correctly represent the odds, because when p → 0, odds → 0, but when p → 1, odds → ∞. If we take the logit (which is the natural logarithm of the odds),...
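The logit relationship can be verified numerically: taking the natural logarithm of the odds gives z, and inverting that relation recovers the sigmoid function that logistic regression is built on. A minimal sketch:

```python
import numpy as np

def odds(p):
    """Odds of belonging to class 1, given its probability p."""
    return p / (1.0 - p)

def sigmoid(z):
    """Inverting log(odds(p)) = z yields p = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

for p in (0.1, 0.5, 0.9):
    z = np.log(odds(p))              # the logit of p
    print(p, odds(p), sigmoid(z))    # sigmoid(logit(p)) round-trips to p
```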
