
Machine Learning Algorithms

Popular algorithms for data science and machine learning, 2nd Edition

Giuseppe Bonaccorso

  1. 522 pages
  2. English

About the Book

An easy-to-follow, step-by-step guide for getting to grips with the real-world application of machine learning algorithms

Key Features

  • Explore statistics and complex mathematics for data-intensive applications
  • Discover new developments in the EM algorithm, PCA, and Bayesian regression
  • Study patterns and make predictions across various datasets

Book Description

Machine learning has gained tremendous popularity for its powerful and fast predictions on large datasets. However, the true forces behind its powerful output are the complex algorithms, involving substantial statistical analysis, that churn through large datasets and generate genuine insight.

This second edition of Machine Learning Algorithms walks you through prominent developments in machine learning algorithms that constitute major contributions to the field, helping you to strengthen and master statistical interpretation across the areas of supervised, semi-supervised, and reinforcement learning. Once the core concepts of an algorithm have been covered, you'll explore real-world examples based on the most widely used libraries, such as scikit-learn, NLTK, TensorFlow, and Keras. You will discover new topics such as principal component analysis (PCA), independent component analysis (ICA), Bayesian regression, discriminant analysis, advanced clustering, and Gaussian mixture models.

By the end of this book, you will have studied machine learning algorithms and be able to put them into production to make your machine learning applications more innovative.

What you will learn

  • Study feature selection and the feature engineering process
  • Assess performance and error trade-offs for linear regression
  • Build a data model and understand how it works by using different types of algorithms
  • Learn to tune the parameters of Support Vector Machines (SVM)
  • Explore the concept of natural language processing (NLP) and recommendation systems
  • Create a machine learning architecture from scratch

Who this book is for

Machine Learning Algorithms is for you if you are a machine learning engineer, data engineer, or junior data scientist who wants to advance in the field of predictive analytics and machine learning. Familiarity with R and Python will be an added advantage for getting the best from this book.


Information

Year
2018
ISBN
9781789345483
Edition
2
Subject
Computer Science

Linear Classification Algorithms

This chapter begins by analyzing linear classification problems, with a particular focus on logistic regression (despite its name, it's a classification algorithm) and the stochastic gradient descent (SGD) approach. Even if these strategies can appear too simple, they're still the main choices in many classification tasks.
Speaking of which, it's useful to remember a very important philosophical principle: Occam's razor.
In our context, it states that the first choice must always be the simplest, and only if it doesn't fit should we move on to more complex models. In the second part of the chapter, we're going to discuss some common metrics that are helpful when evaluating a classification task. They are not limited to linear models, so we will use them when talking about different strategies as well.
In particular, we are going to discuss the following:
  • The general structure of a linear classification problem
  • Logistic regression (with and without regularization)
  • SGD algorithms and perceptron
  • Passive-aggressive algorithms
  • Grid search of optimal hyperparameters
  • The most important classification metrics
  • The Receiver Operating Characteristic (ROC) curve

Linear classification

Let's consider a generic linear classification problem with two classes. In the following graph, there's an example:
[Figure: bidimensional scenario for a linear classification problem]
Our goal is to find an optimal hyperplane that separates the two classes. In multi-class problems, the one-vs-all strategy is normally adopted, so the discussion can focus only on binary classifications. Suppose we have the following dataset, made up of n m-dimensional samples:

X = {x₁, x₂, …, xₙ}, where xᵢ ∈ ℝᵐ
This dataset is associated with the following target set:

Y = {y₁, y₂, …, yₙ}, where yᵢ ∈ {0, 1} or yᵢ ∈ {-1, 1}
Generally, there are two equivalent options for the outputs: binary (yᵢ ∈ {0, 1}) and bipolar (yᵢ ∈ {-1, 1}). Different algorithms are based on the former or the latter without any substantial difference; normally, the choice is made to simplify the computation and has no impact on the results.
We can now define a weight vector made of m continuous components:

w = (w₁, w₂, …, wₘ), where wⱼ ∈ ℝ
We can also define the quantity z:

z = w · x = w₁x₁ + w₂x₂ + … + wₘxₘ
If x is a variable, z is the value determined by the hyperplane equation. Therefore, in a bipolar scenario, if the set of coefficients w that has been determined is correct, the following happens:

z = w · x > 0 when yᵢ = 1
z = w · x < 0 when yᵢ = -1
When working with binary outputs, the decision is normally made according to a threshold. For example, if the output z ∈ (0, 1), the previous condition becomes the following:

y = 1 when z > 0.5
y = 0 when z ≤ 0.5
Now, we must find a way to optimize w to reduce the classification error. If such a combination exists (with a certain error threshold), we say that our problem is linearly separable. On the other hand, when it's impossible to find a linear classifier, the problem is defined as non-linearly separable.
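As a quick illustration (with hypothetical data and coefficients, not taken from the book), a bipolar decision rule based on a known hyperplane can be sketched in a few lines of NumPy:

```python
import numpy as np

# Hypothetical 2D dataset: two clusters separated by the line x1 + x2 = 1
X = np.array([[0.2, 0.1], [0.4, 0.3], [0.9, 0.8], [1.1, 0.9]])
y = np.array([-1, -1, 1, 1])      # bipolar targets

w = np.array([1.0, 1.0])          # coefficients of the separating hyperplane
b = -1.0                          # intercept: x1 + x2 - 1 = 0

z = X @ w + b                     # z-value for every sample
y_pred = np.where(z > 0, 1, -1)   # bipolar decision rule

# Every sample is classified correctly, so this toy problem
# is linearly separable
print(y_pred)
```

Since y_pred matches the target set exactly, a zero-error linear classifier exists for this dataset.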
A very simple but famous example belonging to the second class is given by the XOR logical operator:
[Figure: schema representing the non-linearly separable problem of binary XOR]
As you can see, any line will always include a wrong sample. Hence, to solve this problem, it is necessary to employ non-linear techniques based on higher-order curves (for example, two parabolas). However, in many real-life cases, it's possible to use linear techniques (which are often simpler and faster) for non-linear problems too, provided that a tolerable misclassification error is accepted.
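This can be verified empirically. A minimal sketch using scikit-learn's Perceptron (a linear model discussed later in this chapter) shows that no linear classifier reaches 100% accuracy on the XOR truth table:

```python
import numpy as np
from sklearn.linear_model import Perceptron

# XOR truth table: class 1 when exactly one input is 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

clf = Perceptron(max_iter=1000, tol=None, random_state=1000)
clf.fit(X, y)

# A linear model can never classify all four XOR points correctly,
# so the training accuracy is always below 1.0
accuracy = clf.score(X, y)
```

Whatever the random seed, at least one of the four points always ends up on the wrong side of the learned line.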

Logistic regression

Even if it's called regression, this is a classification method that is based on the probability of a sample belonging to a class. As our probabilities must be continuous and bounded in (0, 1), it's necessary to introduce a threshold function to filter the term z. As already done with linear regression, we can get rid of the extra parameter corresponding to the intercept by adding a 1 element at the end of each input vector:

x = (x₁, x₂, …, xₘ, 1)
In this way, we can consider a single parameter vector θ, containing m + 1 elements, and compute the z-value with a dot product:

z = θ · x = θ₁x₁ + θ₂x₂ + … + θₘxₘ + θₘ₊₁
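As a sketch (the values are purely illustrative), appending the 1 element makes the intercept just another component of θ, so a single dot product replaces the explicit w · x + b form:

```python
import numpy as np

x = np.array([0.5, -1.2, 3.0])            # a sample with m = 3 features
x_aug = np.append(x, 1.0)                 # trailing 1 absorbs the intercept

theta = np.array([0.8, 0.1, -0.4, 0.25])  # m + 1 parameters (last = intercept)

# A single dot product now includes the bias term
z = np.dot(theta, x_aug)

# Equivalent to the explicit form w . x + b
z_explicit = np.dot(theta[:-1], x) + theta[-1]
```

Both expressions yield the same z, which is why the augmented form is preferred: the optimizer only has to handle one parameter vector.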
Now, let's suppose we introduce the probability p(xᵢ) that an element belongs to class 1. Clearly, the same element belongs to class 0 with probability 1 - p(xᵢ). Logistic regression is mainly based on the idea of modeling the odds of belonging to class 1 using an exponential function:

odds(p(xᵢ)) = p(xᵢ) / (1 - p(xᵢ)) = e^z
This function is continuous and differentiable on ℝ, always positive, and tends to infinity when the argument x → ∞. These conditions are necessary to correctly represent the odds, because when p → 0, odds → 0, but when p → 1, odds → ∞. If we take the logit (which is the natural logarithm of the odds),...
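The relationship between z, the odds, and the logit can be checked numerically. A short sketch (the value of z is arbitrary), using the sigmoid p = 1 / (1 + e^(-z)) that inverts the log-odds relationship:

```python
import numpy as np

def sigmoid(z):
    # Maps any real z into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

z = 1.5                    # an arbitrary z-value
p = sigmoid(z)             # probability of belonging to class 1

odds = p / (1.0 - p)       # by construction, this equals e^z
logit = np.log(odds)       # the logit (natural log of the odds) recovers z
```

Here odds equals e^1.5 and the logit returns exactly 1.5, confirming that modeling the log-odds as the linear term z is equivalent to squashing z through the sigmoid.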
