Machine Learning Algorithms
eBook - ePub

Popular algorithms for data science and machine learning, 2nd Edition

Giuseppe Bonaccorso

  1. 522 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

Book Information

An easy-to-follow, step-by-step guide for getting to grips with the real-world application of machine learning algorithms

Key Features

  • Explore statistics and complex mathematics for data-intensive applications
  • Discover new developments in the EM algorithm, PCA, and Bayesian regression
  • Study patterns and make predictions across various datasets

Book Description

Machine learning has gained tremendous popularity for its powerful and fast predictions with large datasets. However, the real force behind its powerful output is the set of complex algorithms, grounded in substantial statistical analysis, that churn through large datasets and generate meaningful insight.

This second edition of Machine Learning Algorithms walks you through prominent developments in machine learning algorithms, which constitute major contributions to the machine learning process and help you to strengthen and master statistical interpretation across the areas of supervised, semi-supervised, and reinforcement learning. Once the core concepts of an algorithm have been covered, you'll explore real-world examples based on the most widely used libraries, such as scikit-learn, NLTK, TensorFlow, and Keras. You will discover new topics such as principal component analysis (PCA), independent component analysis (ICA), Bayesian regression, discriminant analysis, advanced clustering, and Gaussian mixture models.

By the end of this book, you will have studied machine learning algorithms and be able to put them into production to make your machine learning applications more innovative.

What you will learn

  • Study feature selection and the feature engineering process
  • Assess performance and error trade-offs for linear regression
  • Build a data model and understand how it works by using different types of algorithms
  • Learn to tune the parameters of Support Vector Machines (SVM)
  • Explore the concept of natural language processing (NLP) and recommendation systems
  • Create a machine learning architecture from scratch

Who this book is for

Machine Learning Algorithms is for you if you are a machine learning engineer, data engineer, or junior data scientist who wants to advance in the field of predictive analytics and machine learning. Familiarity with R and Python will be an added advantage for getting the best from this book.


Information

Year
2018
ISBN
9781789345483
Edition
2
Category
Computer Science

Linear Classification Algorithms

This chapter begins by analyzing linear classification problems, with a particular focus on logistic regression (despite its name, it's a classification algorithm) and the stochastic gradient descent (SGD) approach. Even though these strategies may appear too simple, they're still the main choices in many classification tasks.
Speaking of which, it's useful to remember a very important philosophical principle: Occam's razor.
In our context, it states that the first choice must always be the simplest one, and that only if it doesn't fit is it necessary to move on to more complex models. In the second part of the chapter, we're going to discuss some common metrics that are helpful when evaluating a classification task. They are not limited to linear models, so we'll use them when talking about different strategies as well.
In particular, we are going to discuss the following:
  • The general structure of a linear classification problem
  • Logistic regression (with and without regularization)
  • SGD algorithms and perceptron
  • Passive-aggressive algorithms
  • Grid search of optimal hyperparameters
  • The most important classification metrics
  • The Receiver Operating Characteristic (ROC) curve

Linear classification

Let's consider a generic linear classification problem with two classes. In the following graph, there's an example:
Bidimensional scenario for a linear classification problem
Our goal is to find an optimal hyperplane that separates the two classes. In multi-class problems, the one-vs-all strategy is normally adopted, so the discussion can focus only on binary classifications. Suppose we have the following dataset made up of n m-dimensional samples:

X = {x̄₁, x̄₂, ..., x̄ₙ} where x̄ᵢ ∈ ℝᵐ
This dataset is associated with the following target set:

ȳ = {y₁, y₂, ..., yₙ} where yᵢ ∈ {0, 1} or yᵢ ∈ {-1, 1}

Generally, there are two equivalent options, binary ({0, 1}) and bipolar ({-1, 1}) outputs, and different algorithms are based on the former or the latter without any substantial difference. Normally, the choice is made to simplify the computation and has no impact on the results.
We can now define a weight vector made of m continuous components:

w̄ = (w₁, w₂, ..., wₘ)

We can also define the quantity z:

z = w̄ᵀx̄ = w₁x₁ + w₂x₂ + ... + wₘxₘ

If x̄ is a variable, z is the value determined by the hyperplane equation. Therefore, in a bipolar scenario, if the set of coefficients w̄ that has been determined is correct, the following happens:

sign(z) = +1 if x̄ belongs to class 1
sign(z) = -1 if x̄ belongs to class -1

When working with binary outputs, the decision is normally made according to a threshold. For example, if the output z ∈ (0, 1), the previous condition becomes the following:

y = 1 if z > 0.5
y = 0 if z < 0.5
Now, we must find a way to optimize w to reduce the classification error. If such a combination exists (with a certain error threshold), we say that our problem is linearly separable. On the other hand, when it's impossible to find a linear classifier, the problem is defined as non-linearly separable.
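As a quick illustration of linear separability, the following sketch (an illustrative assumption, not the chapter's own example) trains scikit-learn's Perceptron, discussed later in this chapter, on a toy dataset of two well-separated Gaussian clouds. Since the problem is linearly separable, the learned weight vector achieves zero classification error:

```python
# A minimal sketch of a linearly separable problem. The dataset and the
# choice of Perceptron here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Perceptron

rng = np.random.RandomState(0)
# Two well-separated Gaussian clouds (class 0 and class 1)
X = np.vstack([rng.randn(50, 2) + [-3.0, -3.0],
               rng.randn(50, 2) + [3.0, 3.0]])
y = np.array([0] * 50 + [1] * 50)

# tol=None forces training to run until max_iter epochs, guaranteeing
# convergence on separable data (perceptron convergence theorem)
clf = Perceptron(max_iter=1000, tol=None, random_state=0)
clf.fit(X, y)

print(clf.score(X, y))  # zero training error on this separable dataset
```

If the two clouds were moved close enough to overlap, no choice of w̄ would reach zero error and the score would drop below 1.0.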
A very simple but famous example belonging to the second class is given by the XOR logical operator:
Schema representing the non-linearly-separable problem of binary XOR
As you can see, any line will always include a wrong sample. Hence, to solve this problem, it is necessary to employ non-linear techniques based on higher-order curves (for example, two parabolas). However, in many real-life cases, it's possible to use linear techniques (which are often simpler and faster) for non-linear problems too, provided a certain misclassification error is tolerable.
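The XOR example can be verified directly. In this sketch, scikit-learn's Perceptron stands in for any linear classifier (an assumption; any linear model behaves the same way): because no line separates the four XOR points, the training accuracy can never exceed 3/4:

```python
# Sketch: a linear classifier cannot solve XOR. Any line misclassifies
# at least one of the four points, so accuracy is bounded by 0.75.
import numpy as np
from sklearn.linear_model import Perceptron

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR truth table

clf = Perceptron(max_iter=1000, tol=None, random_state=0)
clf.fit(X, y)

acc = clf.score(X, y)
print(acc)  # at most 0.75, since the decision boundary is a line
```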

Logistic regression

Even if it is called regression, this is a classification method that is based on the probability of a sample belonging to a class. As our probabilities must be continuous and bounded in (0, 1), it's necessary to introduce a threshold function to filter the term z. As already done with linear regression, we can get rid of the extra parameter corresponding to the intercept by adding a 1 element at the end of each input vector:

x̄ᵢ → (xᵢ₁, xᵢ₂, ..., xᵢₘ, 1)

In this way, we can consider a single parameter vector θ̄, containing m + 1 elements, and compute the z-value with a dot product:

z = θ̄ᵀx̄
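The bias trick described above can be sketched in a few lines of NumPy (the values here are arbitrary): appending a 1 to each input vector lets a single vector θ̄ of m + 1 elements absorb the intercept, so z reduces to a plain dot product:

```python
# Sketch of the bias trick: append a 1 to each sample so the intercept
# becomes just another weight in theta. Values are arbitrary.
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 0.5]])          # n = 2 samples, m = 2 features
theta = np.array([0.4, -0.2, 0.1])  # m weights plus the intercept

# Append a column of ones: each row becomes (x1, x2, 1)
Xb = np.hstack([X, np.ones((X.shape[0], 1))])

z = Xb @ theta  # z_i = theta . x_i, intercept included
print(z)        # identical to X @ theta[:2] + theta[2]
```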
Now, let's suppose we introduce the probability p(xᵢ) that an element belongs to class 1. Clearly, the same element belongs to class 0 with a probability 1 - p(xᵢ). Logistic regression is mainly based on the idea of modeling the odds of belonging to class 1 using an exponential function:

odds(p) = p / (1 - p) = e^z

This function is continuous and differentiable on ℝ, always positive, and tends to infinity when the argument z → ∞. These conditions are necessary to correctly represent the odds, because when p → 0, odds → 0, but when p → 1, odds → ∞. If we take the logit (which is the natural logarithm of the odds),...
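As a numeric sketch of this relationship (with arbitrary values), the sigmoid p = 1 / (1 + e^(-z)) inverts the logit: applying ln(p / (1 - p)) to the sigmoid's output recovers z, and every output lies strictly inside (0, 1):

```python
# Numeric sketch of the odds/logit relationship: the sigmoid maps any
# real z into (0, 1), and the logit maps it back. Values are arbitrary.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(p):
    return np.log(p / (1.0 - p))

z = np.array([-2.0, 0.0, 3.5])
p = sigmoid(z)

print(p)         # every value lies in (0, 1)
print(logit(p))  # recovers z: the logit is the inverse of the sigmoid
```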
