Hands-On Unsupervised Learning with Python

Implement machine learning and deep learning models using Scikit-Learn, TensorFlow, and more

Giuseppe Bonaccorso


About this book

Discover the skill sets required to implement various approaches to machine learning with Python

Key Features

  • Explore unsupervised learning with clustering, autoencoders, restricted Boltzmann machines, and more
  • Build your own neural network models using modern Python libraries
  • Practical examples show you how to implement different machine learning and deep learning techniques

Book Description

Unsupervised learning is about making use of raw, unlabeled data and applying learning algorithms to it so that a machine can discover structure in the data on its own. With this book, you will explore the concept of unsupervised learning with Python, clustering large sets of data and analyzing them repeatedly until the desired outcome is found.

This book starts with the key differences between supervised, unsupervised, and semi-supervised learning. You will be introduced to the most widely used libraries and frameworks from the Python ecosystem, and you will address unsupervised learning in both the machine learning and deep learning domains. You will explore the algorithms and techniques used to implement unsupervised learning in real-world use cases, and you will learn a variety of unsupervised learning approaches, including randomized optimization, clustering, feature selection and transformation, and information theory. You will gain hands-on experience of how neural networks can be employed in unsupervised scenarios, and you will also explore the steps involved in building and training a GAN in order to process images.

By the end of this book, you will have mastered unsupervised learning and be able to apply it to different real-world challenges.

What you will learn

  • Use clustering algorithms to identify and optimize natural groups of data
  • Explore advanced non-linear and hierarchical clustering in action
  • Apply soft label assignments with fuzzy c-means and Gaussian mixture models
  • Detect anomalies through density estimation
  • Perform principal component analysis using neural network models
  • Create unsupervised models using GANs

Who this book is for

This book is intended for statisticians, data scientists, machine learning developers, and deep learning practitioners who want to build smart applications by implementing the key building blocks of unsupervised learning, and who want to master the latest techniques and algorithms in machine learning and deep learning through real-world examples. Some prior knowledge of machine learning concepts and statistics is desirable.


Book information

Year: 2019
ISBN: 9781789349276

Clustering Fundamentals

In this chapter, we are going to introduce the fundamental concepts of cluster analysis, focusing attention on the main principles that are shared by many algorithms, and on the most important techniques that can be employed to evaluate the performance of a method.
In particular, we are going to discuss:
  • An introduction to clustering and distance functions
  • K-means and K-means++
  • Evaluation metrics
  • K-Nearest Neighbors (KNN)
  • Vector Quantization (VQ)

Technical requirements

The code presented in this chapter requires:
  • Python 3.5+ (Anaconda distribution: https://www.anaconda.com/distribution/ is highly recommended)
  • Libraries:
    • SciPy 0.19+
    • NumPy 1.10+
    • scikit-learn 0.20+
    • pandas 0.22+
    • Matplotlib 2.0+
    • seaborn 0.9+
The dataset can be obtained from the UCI Machine Learning Repository. The CSV file can be downloaded from https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data and doesn't need any preprocessing, except for the addition of the column names, which will occur during the loading stage.
The examples are available on the GitHub repository:
https://github.com/PacktPublishing/HandsOn-Unsupervised-Learning-with-Python/Chapter02.
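For reference, here is a minimal sketch of the loading step described above, assuming pandas is available (see the library list). The wdbc.data file ships without a header row; the column names below are illustrative placeholders, not necessarily the ones used in the book's code:

```python
import pandas as pd

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data"

# The file has 32 comma-separated columns and no header: an ID, the diagnosis
# label (M/B), and 30 anonymized numeric features. The names are hypothetical.
column_names = ['id', 'diagnosis'] + ['feature_{}'.format(i) for i in range(30)]

df = pd.read_csv(url, header=None, names=column_names)

print(df.shape)                         # (569, 32)
print(df['diagnosis'].value_counts())   # class balance of the two labels
```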

Introduction to clustering

As we explained in Chapter 1, Getting Started with Unsupervised Learning, the main goal of a cluster analysis is to group the elements of a dataset according to a similarity measure or a proximity criterion. In the first part of this chapter, we are going to focus on the former approach, while in the second part and in the next chapter, we will analyze more generic methods that exploit other geometric features of the dataset.
Let's take a data generating process p_data(x) and draw N samples from it:

$X = \{\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_N\}, \quad \bar{x}_i \in \mathbb{R}^n$
It's possible to assume that the probability space of p_data(x) is partitionable into (potentially infinite) configurations containing K (for K = 1, 2, ...) regions, so that p_data(x; k) represents the probability of a sample belonging to a cluster k. In this way, we are stating that every possible clustering structure already exists once p_data(x) is determined. Further assumptions can be made about the clustering probability distributions that better approximate p_data(x) (as we're going to see in Chapter 5, Soft Clustering and Gaussian Mixture Models). However, as we are trying to split the probability space (and the corresponding samples) into cohesive groups, we can assume two possible strategies:
  • Hard clustering: In this case, each sample x_p ∈ X is assigned to a single cluster K_i, and K_i ∩ K_j = ∅ for i ≠ j. The majority of the algorithms we are going to discuss belong to this category. In this case, the problem can be expressed as a parameterized function that assigns a cluster to each input sample: $k = f(\bar{x}_p; \theta)$, with $k \in \{1, 2, \ldots, K\}$.
  • Soft clustering: This approach, often subdivided into probabilistic and fuzzy clustering, determines the probability of every sample x_p ∈ X belonging to predetermined clusters. Hence, if there are K clusters, we have a probability vector p(x) = [p_1(x), p_2(x), ..., p_K(x)], where p_i(x) represents the probability of being assigned to cluster i. In this case, the clusters are not disjoint and, generally, a sample will belong to all clusters with a membership degree that is equivalent to a probability (this concept is peculiar to fuzzy clustering). A minimal contrast between the two strategies is sketched right after this list.
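The following sketch makes the distinction concrete, assuming scikit-learn (listed in the technical requirements) and synthetic blobs in place of a real dataset. K-means produces a single hard label per sample, while a Gaussian mixture produces the membership vector p(x):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Synthetic bidimensional dataset with three natural groups
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=1000)

# Hard clustering: each sample receives exactly one label k in {0, 1, 2}
km = KMeans(n_clusters=3, random_state=1000)
hard_labels = km.fit_predict(X)

# Soft clustering: each sample receives a probability vector over the K clusters
gm = GaussianMixture(n_components=3, random_state=1000)
gm.fit(X)
soft_labels = gm.predict_proba(X)

print(hard_labels[0])   # a single cluster index, for example 2
print(soft_labels[0])   # a vector such as [0.01 0.01 0.98]; each row sums to 1
```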
For our purposes, in this chapter we simply assume that the dataset X is drawn from a data generating process whose space, given a metric function, is splittable into compact regions separated from each other. In fact, our main objective is to find K clusters that satisfy the double property of maximum cohesion and maximum separation. This concept will be clearer when discussing the K-means algorithm. However, it's possible to imagine clusters as blobs whose density is much higher than the one observable in the space separating two or more of them, as shown in the following diagram:
Bidimensional clustering structure obeying the rule of maximum cohesion and maximum separation. N_k represents the number of samples belonging to cluster k, while N_out(r) is the number of samples lying outside the balls centered at each cluster center with maximum radius r
In the preceding diagram, we are assuming that the majority of samples will be captured by one of the balls, considering the maximum distance of a sample from the center. However, as we don't want to impose any restriction on the growth of a ball (that is, it can contain any number of samples), it's preferable not to consider the radius and to evaluate the separating region by sampling small subregions (of the whole space) and collecting their densities.
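As a rough illustration of this idea (a sketch of the density-sampling argument, not a procedure from the book), we can partition the bounding box of a bidimensional dataset into a grid of small square subregions and collect the sample count of each cell; cells inside a cluster should show a much higher density than cells in the separating region:

```python
import numpy as np
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=1000, centers=3, cluster_std=1.0, random_state=1000)

# 20 x 20 grid of subregions covering the bounding box of the dataset;
# counts[i, j] is the number of samples falling in each cell
counts, x_edges, y_edges = np.histogram2d(X[:, 0], X[:, 1], bins=20)

occupied = counts[counts > 0]
print('Max subregion density (~D):', occupied.max())
print('Median non-empty density (~d):', np.median(occupied))
```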
In a perfect scenario, the clusters span some subregions whose density is D, while the separating region is characterized by a density d << D. The discussion of geometric properties can become extremely complex and, in many cases, it's purely theoretical. Hence, we consider only the distance between the closest points belonging to different clusters. If this value is much smaller than the maximum distance between a sample and its cluster center, for all clusters, we can be sure that the separation is effective and that it's easy to distinguish between clusters and separating regions. Instead, when...
