Hands-On Unsupervised Learning with Python
Implement machine learning and deep learning models using Scikit-Learn, TensorFlow, and more
Giuseppe Bonaccorso
- 386 pages
- English
- ePUB (mobile friendly)
About This Book
Discover the skill sets required to implement various approaches to machine learning with Python
Key Features
- Explore unsupervised learning with clustering, autoencoders, restricted Boltzmann machines, and more
- Build your own neural network models using modern Python libraries
- Learn how to implement different machine learning and deep learning techniques through practical examples
Book Description
Unsupervised learning is about making use of raw, unlabeled data and applying learning algorithms to it so that a machine can discover structure without being told the desired outcome. With this book, you will use Python to explore the concepts of unsupervised learning, clustering large sets of data and analyzing them iteratively until meaningful patterns emerge.
This book starts with the key differences between supervised, unsupervised, and semi-supervised learning. You will be introduced to the most widely used libraries and frameworks from the Python ecosystem, and will address unsupervised learning in both the machine learning and deep learning domains. You will explore various algorithms and techniques that are used to implement unsupervised learning in real-world use cases. You will learn a variety of unsupervised learning approaches, including randomized optimization, clustering, feature selection and transformation, and information theory. You will get hands-on experience with how neural networks can be employed in unsupervised scenarios. You will also explore the steps involved in building and training a GAN in order to process images.
By the end of this book, you will have learned the art of unsupervised learning for different real-world challenges.
What you will learn
- Use clustering algorithms to identify and optimize natural groups of data
- Explore advanced non-linear and hierarchical clustering in action
- Apply soft label assignments with fuzzy c-means and Gaussian mixture models
- Detect anomalies through density estimation
- Perform principal component analysis using neural network models
- Create unsupervised models using GANs
Who this book is for
This book is intended for statisticians, data scientists, machine learning developers, and deep learning practitioners who want to build smart applications by implementing the key building blocks of unsupervised learning, and who want to master the new techniques and algorithms offered in machine learning and deep learning using real-world examples. Some prior knowledge of machine learning concepts and statistics is desirable.
Clustering Fundamentals
- An introduction to clustering and distance functions
- K-means and K-means++
- Evaluation metrics
- K-Nearest Neighbors (KNN)
- Vector Quantization (VQ)
Technical requirements
- Python 3.5+ (Anaconda distribution: https://www.anaconda.com/distribution/ is highly recommended)
- Libraries:
- SciPy 0.19+
- NumPy 1.10+
- scikit-learn 0.20+
- pandas 0.22+
- Matplotlib 2.0+
- seaborn 0.9+
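A quick way to verify that the environment satisfies these requirements is to print the installed versions (a minimal sketch; the module names and minimum versions simply mirror the list above):

```python
import importlib

# Minimum versions taken from the requirements list above
requirements = {
    'scipy': '0.19',
    'numpy': '1.10',
    'sklearn': '0.20',
    'pandas': '0.22',
    'matplotlib': '2.0',
    'seaborn': '0.9',
}

for module_name, minimum in requirements.items():
    module = importlib.import_module(module_name)
    print('{}: installed {}, required >= {}'.format(
        module_name, module.__version__, minimum))
```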
Introduction to clustering
- Hard clustering: In this case, each sample x_p ∈ X is assigned to a cluster K_i, and K_i ∩ K_j = ∅ for i ≠ j. The majority of the algorithms we are going to discuss belong to this category. Here, the problem can be expressed as a parameterized function that assigns a cluster index to each input sample.
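In symbols, a minimal sketch of such an assignment function (the notation below is an assumption, with K denoting the number of clusters and θ the parameters of the function) could be written as:

$$c_\theta : X \rightarrow \{1, 2, \ldots, K\}, \qquad c_\theta(x_p) = i \iff x_p \in K_i$$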
- Soft clustering: This is often subdivided into probabilistic and fuzzy clustering; such an approach determines, for every sample x_p ∈ X, the degree of membership in each of the predetermined clusters. Hence, if there are K clusters, we have a probability vector p(x_p) = [p_1(x_p), p_2(x_p), ..., p_K(x_p)], where p_i(x_p) represents the probability of x_p being assigned to cluster i. In this case, the clusters are not disjoint and, generally, a sample will belong to all clusters with a membership degree that is equivalent to a probability (this concept is peculiar to fuzzy clustering).
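As a brief illustration of soft assignments, the following sketch uses scikit-learn's GaussianMixture on a synthetic dataset (the data and parameter choices are purely illustrative, not taken from the book):

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Synthetic dataset with three natural groups (illustrative only)
X, _ = make_blobs(n_samples=300, centers=3, random_state=1000)

# Fit a Gaussian mixture with K = 3 components
gm = GaussianMixture(n_components=3, random_state=1000)
gm.fit(X)

# Each row is the probability vector [p_1(x_p), p_2(x_p), p_3(x_p)]
probabilities = gm.predict_proba(X)
print(probabilities[:5])
```

Each sample therefore receives a full probability vector rather than a single label, which is the defining trait of a soft assignment.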