Python Feature Engineering Cookbook

Over 70 recipes for creating, engineering, and transforming features to build machine learning models

  • 372 pages
  • English
  • ePUB (mobile friendly)
  • Available on iOS & Android

About this book

Extract accurate information from data to train and improve machine learning models using NumPy, SciPy, pandas, and scikit-learn libraries

Key Features

  • Discover solutions for feature generation, feature extraction, and feature selection
  • Uncover the end-to-end feature engineering process across continuous, discrete, and unstructured datasets
  • Implement modern feature extraction techniques using Python's pandas, scikit-learn, SciPy, and NumPy libraries

Book Description

Feature engineering is invaluable for developing and enriching your machine learning models. In this cookbook, you will work with the best tools to streamline your feature engineering pipelines and techniques, and to simplify and improve the quality of your code.

Using Python libraries such as pandas, scikit-learn, Featuretools, and Feature-engine, you'll learn how to work with both continuous and discrete datasets, and how to transform features from unstructured datasets. You will develop the skills necessary to select the best features as well as the most suitable extraction techniques. This book will cover Python recipes that will help you automate feature engineering to simplify complex processes. You'll also get to grips with different feature engineering strategies, such as the Box-Cox transform, power transform, and log transform, across machine learning, reinforcement learning, and natural language processing (NLP) domains.

By the end of this book, you'll have discovered tips and practical solutions to all of your feature engineering problems.

What you will learn

  • Simplify your feature engineering pipelines with powerful Python packages
  • Get to grips with imputing missing values
  • Encode categorical variables with a wide set of techniques
  • Extract insights from text quickly and effortlessly
  • Develop features from transactional data and time series data
  • Derive new features by combining existing variables
  • Understand how to transform, discretize, and scale your variables
  • Create informative variables from date and time

Who this book is for

This book is for machine learning professionals, AI engineers, data scientists, and NLP and reinforcement learning engineers who want to optimize and enrich their machine learning models with the best features. Knowledge of machine learning and Python coding will assist you with understanding the concepts covered in this book.

Encoding Categorical Variables

Categorical variables are variables whose values are selected from a group of categories or labels. For example, the variable Gender with the values of male or female is categorical, and so is the variable marital status with the values of never married, married, divorced, or widowed. In some categorical variables, the labels have an intrinsic order; for example, in the variable Student's grade, the values of A, B, C, or Fail are ordered, with A being the highest grade and Fail the lowest. These are called ordinal categorical variables. Variables in which the categories do not have an intrinsic order are called nominal categorical variables, such as the variable City, with the values of London, Manchester, Bristol, and so on.
The values of categorical variables are often encoded as strings. Scikit-learn, the open source Python library for machine learning, does not support strings as values; therefore, we need to transform those strings into numbers. The act of replacing strings with numbers is called categorical encoding. In this chapter, we will discuss multiple categorical encoding techniques.
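To make this concrete, here is a minimal sketch, using hypothetical toy data, of what replacing strings with numbers looks like for an ordinal variable such as Student's grade:
import pandas as pd

# hypothetical toy data: Student's grade is ordinal, so the numbers should preserve the order
df = pd.DataFrame({'grade': ['A', 'B', 'Fail', 'C', 'A']})
df['grade_encoded'] = df['grade'].map({'Fail': 0, 'C': 1, 'B': 2, 'A': 3})
print(df)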
This chapter will cover the following recipes:
  • Creating binary variables through one-hot encoding
  • Performing one-hot encoding of frequent categories
  • Replacing categories with ordinal numbers
  • Replacing categories with counts or frequency of observations
  • Encoding with integers in an ordered manner
  • Encoding with the mean of the target
  • Encoding with the Weight of Evidence
  • Grouping rare or infrequent categories
  • Performing binary encoding
  • Performing feature hashing

Technical requirements

In this chapter, we will use the following Python libraries: pandas, NumPy, Matplotlib, and scikit-learn. I recommend installing the free Anaconda Python distribution, which contains all of these packages.
For details on how to install the Anaconda Python distribution, visit the Technical requirements section in Chapter 1, Foreseeing Variable Problems in Building ML Models.
We will also use the open source Python libraries Feature-engine and Category Encoders, which can be installed using pip:
pip install feature-engine
pip install category_encoders
To learn more about Feature-engine, visit the following sites:
  • Home page: https://www.trainindata.com/feature-engine
  • GitHub: https://github.com/solegalli/feature_engine/
  • Documentation: https://feature-engine.readthedocs.io
To learn more about Category Encoders, visit the following:
  • Documentation: https://contrib.scikit-learn.org/categorical-encoding/
To run the recipes successfully, check that you have the same or higher versions of the Python libraries indicated in the requirements.txt file in the accompanying GitHub repository at https://github.com/PacktPublishing/Python-Feature-Engineering-Cookbook.
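A quick way to confirm which versions you have installed is to print each library's version attribute; here is a minimal sketch, assuming each package exposes the conventional __version__ attribute:
import pandas as pd
import numpy as np
import sklearn
import feature_engine
import category_encoders

# print the installed version of each library used in this chapter
for name, module in [('pandas', pd), ('numpy', np), ('scikit-learn', sklearn),
                     ('feature-engine', feature_engine), ('category_encoders', category_encoders)]:
    print(name, module.__version__)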
We will use the Credit Approval dataset from the UCI Machine Learning Repository, available at https://archive.ics.uci.edu/ml/datasets/credit+approval.
Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
To prepare the dataset, follow these steps:
  1. Visit http://archive.ics.uci.edu/ml/machine-learning-databases/credit-screening/.
  2. Click on crx.data to download the data.
  3. Save crx.data to the folder from which you will run the following commands.
After downloading the data, open up a Jupyter Notebook or a Python IDE and run the following commands.
  1. Import the required libraries:
import random
import pandas as pd
import numpy as np
  2. Load the data:
data = pd.read_csv('crx.data', header=None)
  3. Create a list with the variable names:
varnames = ['A' + str(s) for s in range(1, 17)]  # ['A1', 'A2', ..., 'A16']
  4. Add the variable names to the dataframe:
data.columns = varnames
  5. Replace the question marks in the dataset with NumPy NaN values:
data = data.replace('?', np.nan)
  6. Re-cast numerical variables to float types:
data['A2'] = data['A2'].astype('float')
data['A14'] = data['A14'].astype('float')
  7. Re-code the target variable as binary:
data['A16'] = data['A16'].map({'+': 1, '-': 0})
  8. Make lists with categorical and numerical variables:
cat_cols = [c for c in data.columns if data[c].dtypes == 'O']  # 'O' is the object (string) dtype
num_cols = [c for c in data.columns if data[c].dtypes != 'O']
  9. Fill in the missing data:
data[num_cols] = data[num_cols].fillna(0)
data[cat_cols] = data[cat_cols].fillna('Missing')
  10. Save the prepared data:
data.to_csv('creditApprovalUCI.csv', index=False)
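Before moving on, it can be worth sanity-checking the prepared file. Here is a minimal sketch that reloads the CSV and confirms that no missing values remain (the Credit Approval dataset contains 690 rows and 16 columns):
import pandas as pd

# reload the prepared dataset and run quick checks
data = pd.read_csv('creditApprovalUCI.csv')
print(data.shape)                  # expect (690, 16)
print(data.isnull().sum().sum())   # expect 0: all missing values were filled in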
You can find a Jupyter Notebook with these commands in the accompanying GitHub repository at https://github.com/PacktPublishing/Python-Feature-Engineering-Cookbook.

Creating binary variables through one-hot encoding

In one-hot encoding, we represent a categorical variable as a group of binary variables, where each binary variable represents one category. The binary variable indicates whether the category is present in an observation (1) or not (0). The following table shows the one-hot encoded representation of the Gender variable with the categories of Male and Female:
Gender   Female   Male
Female   1        0
Male     0        1
Male     0        1
Female   1        0
Female   1        0
As shown in the table, from the Gender variable, we can derive the binary variable of Female, which shows the value of 1 for females, or the binary variable of Male, which takes the value of 1 for the males in the dataset.
For the categorical variable of Color with the values of red, blue, and green, we can create three variables called red, blue, and green. These variables will take the value of 1 if the observation is red, blue, or green, respectively, or 0 otherwise.
A categorical variable with k unique categories can be encoded in k-1 binary variables. For Gender, k is 2 as it contains two labels (male and female), therefore, we need to create only one binary variable (k - 1 = 1) to capture all of the information. For the color variable, which has three categories (k=3; red, blue, and green), we need to create two (k - 1 = 2) binary variables to capture all the information, so that the following occurs:
  • If the observation is red, it will be captured by the variable red (red = 1, blue = 0).
  • If the observation is blue, it will be captured by the variable blue (red = 0, blue = 1).
  • If the observation is green, it will be captured by the combination of red and blue (red = 0, blue = 0).
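As a quick illustration of the k-1 encoding just described, here is a minimal sketch using pandas' get_dummies on a hypothetical Color column; with drop_first=True, one category is dropped, so k categories become k-1 binary variables:
import pandas as pd

# hypothetical toy data with one nominal variable
df = pd.DataFrame({'Color': ['red', 'blue', 'green', 'red', 'blue']})

# drop_first=True drops the first category in alphabetical order (blue),
# so the green and red columns alone capture all of the information
dummies = pd.get_dummies(df['Color'], drop_first=True)
print(dummies)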
There are a few occasions in which we may prefer to encode the categorical variables with k binary variables:
  • When training decision trees, as they do not evaluate the entire feature space at the same time
  • When selecting features recursively
  • Wh...
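In those situations, we can keep one binary variable per category. Here is a minimal sketch, again on the hypothetical Color column; drop_first=False (the default) returns all k binary variables:
import pandas as pd

df = pd.DataFrame({'Color': ['red', 'blue', 'green', 'red', 'blue']})

# k encoding: one binary column per category (blue, green, red)
dummies = pd.get_dummies(df['Color'], drop_first=False)
print(dummies)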

Table of contents

  1. Title Page
  2. Copyright and Credits
  3. About Packt
  4. Contributors
  5. Preface
  6. Foreseeing Variable Problems When Building ML Models
  7. Imputing Missing Data
  8. Encoding Categorical Variables
  9. Transforming Numerical Variables
  10. Performing Variable Discretization
  11. Working with Outliers
  12. Deriving Features from Dates and Time Variables
  13. Performing Feature Scaling
  14. Applying Mathematical Computations to Features
  15. Creating Features with Transactional and Time Series Data
  16. Extracting Features from Text Variables
  17. Other Books You May Enjoy