
Python Feature Engineering Cookbook
Over 70 recipes for creating, engineering, and transforming features to build machine learning models
- 372 pages
- English
About this book
Extract accurate information from data to train and improve machine learning models using NumPy, SciPy, pandas, and scikit-learn libraries
Key Features
- Discover solutions for feature generation, feature extraction, and feature selection
- Uncover the end-to-end feature engineering process across continuous, discrete, and unstructured datasets
- Implement modern feature extraction techniques using Python's pandas, scikit-learn, SciPy and NumPy libraries
Book Description
Feature engineering is invaluable for developing and enriching your machine learning models. In this cookbook, you will work with the best tools to streamline your feature engineering pipelines and techniques and simplify and improve the quality of your code.
Using Python libraries such as pandas, scikit-learn, Featuretools, and Feature-engine, you'll learn how to work with both continuous and discrete datasets and be able to transform features from unstructured datasets. You will develop the skills necessary to select the best features as well as the most suitable extraction techniques. This book will cover Python recipes that will help you automate feature engineering to simplify complex processes. You'll also get to grips with different feature engineering strategies, such as the Box-Cox transform, power transform, and log transform, across machine learning, reinforcement learning, and natural language processing (NLP) domains.
By the end of this book, you'll have discovered tips and practical solutions to all of your feature engineering problems.
What you will learn
- Simplify your feature engineering pipelines with powerful Python packages
- Get to grips with imputing missing values
- Encode categorical variables with a wide set of techniques
- Extract insights from text quickly and effortlessly
- Develop features from transactional data and time series data
- Derive new features by combining existing variables
- Understand how to transform, discretize, and scale your variables
- Create informative variables from date and time
Who this book is for
This book is for machine learning professionals, AI engineers, data scientists, and NLP and reinforcement learning engineers who want to optimize and enrich their machine learning models with the best features. Knowledge of machine learning and Python coding will assist you with understanding the concepts covered in this book.
Encoding Categorical Variables
- Creating binary variables through one-hot encoding
- Performing one-hot encoding of frequent categories
- Replacing categories with ordinal numbers
- Replacing categories with counts or frequency of observations
- Encoding with integers in an ordered manner
- Encoding with the mean of the target
- Encoding with the Weight of Evidence
- Grouping rare or infrequent categories
- Performing binary encoding
- Performing feature hashing
Technical requirements
pip install feature-engine
pip install category_encoders
- Home page: https://www.trainindata.com/feature-engine
- GitHub: https://github.com/solegalli/feature_engine/
- Documentation: https://feature-engine.readthedocs.io
- Documentation (category_encoders): https://contrib.scikit-learn.org/categorical-encoding/
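The recipes in this chapter rely on Feature-engine and category_encoders, but the count/frequency idea from the recipe list above can be previewed with pandas alone. A minimal sketch, using an illustrative toy series rather than the book's dataset:

```python
import pandas as pd

# Toy categorical column (illustrative; not from the book's dataset).
s = pd.Series(['a', 'b', 'a', 'c', 'a', 'b'])

# Count how many times each category appears...
counts = s.value_counts()

# ...and replace each label with its count.
encoded = s.map(counts)
```

The same mapping divided by `len(s)` would give frequency encoding instead of count encoding.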
- Visit http://archive.ics.uci.edu/ml/machine-learning-databases/credit-screening/.
- Click on crx.data to download the data.
- Save crx.data to the folder from which you will run the following commands.
- Import the required libraries:
import random
import pandas as pd
import numpy as np
- Load the data:
data = pd.read_csv('crx.data', header=None)
- Create a list with the variable names:
varnames = ['A'+str(s) for s in range(1, 17)]
- Add the variable names to the dataframe:
data.columns = varnames
- Replace the question marks in the dataset with NumPy NaN values:
data = data.replace('?', np.nan)
- Re-cast numerical variables to float types:
data['A2'] = data['A2'].astype('float')
data['A14'] = data['A14'].astype('float')
- Re-code the target variable as binary:
data['A16'] = data['A16'].map({'+': 1, '-': 0})
- Make lists with categorical and numerical variables:
cat_cols = [c for c in data.columns if data[c].dtypes == 'O']
num_cols = [c for c in data.columns if data[c].dtypes != 'O']
- Fill in the missing data:
data[num_cols] = data[num_cols].fillna(0)
data[cat_cols] = data[cat_cols].fillna('Missing')
- Save the prepared data:
data.to_csv('creditApprovalUCI.csv', index=False)
Creating binary variables through one-hot encoding
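The preparation steps above can be collected into a single helper. This is a sketch, not code from the book: the function name is ours, and it works on any dataframe with the same 16-column shape, '?' placeholders, and '+'/'-' target as crx.data:

```python
import numpy as np
import pandas as pd

def prepare_credit_data(raw: pd.DataFrame) -> pd.DataFrame:
    """Apply the recipe's preparation steps to the raw credit data."""
    data = raw.copy()
    data.columns = ['A' + str(s) for s in range(1, 17)]
    data = data.replace('?', np.nan)                  # question marks -> NaN
    data['A2'] = data['A2'].astype('float')           # re-cast numerics
    data['A14'] = data['A14'].astype('float')
    data['A16'] = data['A16'].map({'+': 1, '-': 0})   # binary target
    cat_cols = [c for c in data.columns if data[c].dtypes == 'O']
    num_cols = [c for c in data.columns if data[c].dtypes != 'O']
    data[num_cols] = data[num_cols].fillna(0)         # impute numerics with 0
    data[cat_cols] = data[cat_cols].fillna('Missing') # impute categoricals
    return data
```

Running it on the raw dataframe loaded from crx.data yields the same creditApprovalUCI.csv contents as the step-by-step version.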
| Gender | Female | Male |
|--------|--------|------|
| Female | 1 | 0 |
| Male | 0 | 1 |
| Male | 0 | 1 |
| Female | 1 | 0 |
| Female | 1 | 0 |
- If the observation is red, it will be captured by the variable red (red = 1, blue = 0).
- If the observation is blue, it will be captured by the variable blue (red = 0, blue = 1).
- If the observation is green, it will be captured by the combination of red and blue (red = 0, blue = 0).
- When training decision trees, because they do not evaluate the entire feature space at the same time
- When selecting features recursively
- Wh...
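The k versus k-1 distinction described above can be sketched with pandas' `get_dummies` (the toy colour column is illustrative; note that `drop_first=True` drops the alphabetically first category, here blue, which then becomes the one implied by all-zero dummies):

```python
import pandas as pd

df = pd.DataFrame({'colour': ['red', 'blue', 'green', 'red']})

# Full one-hot encoding: one binary column per category (k columns).
full = pd.get_dummies(df['colour'])

# k-1 encoding: drop one category; an observation of the dropped
# category ('blue') is captured by all remaining dummies being 0.
reduced = pd.get_dummies(df['colour'], drop_first=True)

print(full.columns.tolist())     # ['blue', 'green', 'red']
print(reduced.columns.tolist())  # ['green', 'red']
```

Feature-engine's own `OneHotEncoder` exposes the same choice through its `drop_last` parameter; `get_dummies` is shown here only because it needs no extra installs.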
Table of contents
- Title Page
- Copyright and Credits
- About Packt
- Contributors
- Preface
- Foreseeing Variable Problems When Building ML Models
- Imputing Missing Data
- Encoding Categorical Variables
- Transforming Numerical Variables
- Performing Variable Discretization
- Working with Outliers
- Deriving Features from Dates and Time Variables
- Performing Feature Scaling
- Applying Mathematical Computations to Features
- Creating Features with Transactional and Time Series Data
- Extracting Features from Text Variables
- Other Books You May Enjoy