Hands-On Gradient Boosting with XGBoost and scikit-learn

Perform accessible machine learning and extreme gradient boosting with Python

Corey Wade

  • 310 pages
  • English
  • ePUB (mobile friendly)

About This Book

Get to grips with building robust XGBoost models using Python and scikit-learn for deployment

Key Features

  • Get up and running with machine learning and understand how to boost models with XGBoost in no time
  • Build real-world machine learning pipelines and fine-tune hyperparameters to achieve optimal results
  • Discover tips and tricks and gain innovative insights from XGBoost Kaggle winners

Book Description

XGBoost is an industry-proven, open-source software library that provides a gradient boosting framework for scaling billions of data points quickly and efficiently.

The book introduces machine learning and XGBoost in scikit-learn before building up to the theory behind gradient boosting. You'll cover decision trees and analyze bagging in the machine learning context, learning hyperparameters that extend to XGBoost along the way. You'll build gradient boosting models from scratch and extend gradient boosting to big data while recognizing speed limitations using timers. Details in XGBoost are explored with a focus on speed enhancements and deriving parameters mathematically. With the help of detailed case studies, you'll practice building and fine-tuning XGBoost classifiers and regressors using scikit-learn and the original Python API. You'll leverage XGBoost hyperparameters to improve scores, correct missing values, scale imbalanced datasets, and fine-tune alternative base learners. Finally, you'll apply advanced XGBoost techniques like building non-correlated ensembles, stacking models, and preparing models for industry deployment using sparse matrices, customized transformers, and pipelines.

By the end of the book, you'll be able to build high-performing machine learning models using XGBoost with minimal errors and maximum speed.

What you will learn

  • Build gradient boosting models from scratch
  • Develop XGBoost regressors and classifiers with accuracy and speed
  • Analyze variance and bias in terms of fine-tuning XGBoost hyperparameters
  • Automatically correct missing values and scale imbalanced data
  • Apply alternative base learners like dart, linear models, and XGBoost random forests
  • Customize transformers and pipelines to deploy XGBoost models
  • Build non-correlated ensembles and stack XGBoost models to increase accuracy

Who this book is for

This book is for data science professionals and enthusiasts, data analysts, and developers who want to build fast and accurate machine learning models that scale with big data. Proficiency in Python, along with a basic understanding of linear algebra, will help you to get the most out of this book.


Information

  • Year: 2020
  • ISBN: 9781839213809
  • Edition: 1

Section 1: Bagging and Boosting

The book opens with an XGBoost model using scikit-learn defaults, after preprocessing data with pandas and building standard regression and classification models. The practical theory behind XGBoost is explored by advancing through decision trees (XGBoost base learners), random forests (bagging), and gradient boosting to compare scores and fine-tune ensemble and tree-based hyperparameters.
This section comprises the following chapters:
  • Chapter 1, Machine Learning Landscape
  • Chapter 2, Decision Trees in Depth
  • Chapter 3, Bagging with Random Forests
  • Chapter 4, From Gradient Boosting to XGBoost

Chapter 1: Machine Learning Landscape

Welcome to Hands-On Gradient Boosting with XGBoost and scikit-learn, a book that will teach you the foundations, tips, and tricks of XGBoost, the best machine learning algorithm for making predictions from tabular data.
The focus of this book is XGBoost, also known as Extreme Gradient Boosting. The structure, function, and raw power of XGBoost will be fleshed out in increasing detail in each chapter. The chapters unfold to tell an incredible story: the story of XGBoost. By the end of this book, you will be an expert in leveraging XGBoost to make predictions from real data.
In the first chapter, XGBoost is presented in a sneak preview. It makes a guest appearance in the larger context of machine learning regression and classification to set the stage for what's to come.
This chapter focuses on preparing data for machine learning, a process also known as data wrangling. You will learn to write efficient Python code to load data, describe data, handle null values, transform data into numerical columns, split data into training and test sets, build machine learning models, and implement cross-validation, as well as to compare linear regression and logistic regression models with XGBoost.
The concepts and libraries presented in this chapter are used throughout the book.
This chapter consists of the following topics:
  • Previewing XGBoost
  • Wrangling data
  • Predicting regression
  • Predicting classification
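As a sketch of the workflow these topics cover, the steps can be strung together with pandas and scikit-learn. The breast cancer dataset below is a stand-in for the book's own data, and the choice of logistic regression as a baseline is illustrative:

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Load a built-in dataset into a DataFrame (stand-in for a real CSV)
data = load_breast_cancer()
df = pd.DataFrame(data.data, columns=data.feature_names)
df['target'] = data.target

# Describe the data and count null values
print(df.describe())
print(df.isnull().sum().sum())  # this dataset happens to have no nulls

# Split into features/labels and into training and test sets
X = df.drop(columns='target')
y = df['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Build a baseline model and score it with cross-validation
clf = LogisticRegression(max_iter=5000)
scores = cross_val_score(clf, X_train, y_train, cv=5)
print(scores.mean())
```

The same pattern (load, inspect, split, cross-validate) recurs throughout the chapter with different models swapped in.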

Previewing XGBoost

Machine learning gained recognition with the first neural network in the 1940s, followed by the first machine learning checkers champion in the 1950s. After some quiet decades, the field of machine learning took off when Deep Blue famously beat world chess champion Garry Kasparov in the 1990s. With a surge in computational power, the 1990s and early 2000s produced a plethora of academic papers revealing new machine learning algorithms such as random forests and AdaBoost.
The general idea behind boosting is to transform weak learners into strong learners by iteratively improving upon errors. The key idea behind gradient boosting is to use gradient descent to minimize the errors of the residuals. This evolutionary strand, from standard machine learning algorithms to gradient boosting, is the focus of the first four chapters of this book.
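That residual-fitting loop can be sketched from scratch with scikit-learn decision trees. The toy quadratic data and the specific settings (50 rounds, learning rate 0.1, depth-2 trees) are invented for illustration; for squared error, the residuals are exactly the negative gradient being descended:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy data: y = x^2 with noise (hypothetical example)
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = X.ravel() ** 2 + rng.normal(0, 0.5, size=200)

# Gradient boosting sketch: each weak learner fits the residuals
# (negative gradient of squared error) of the running prediction
learning_rate = 0.1
prediction = np.zeros_like(y)
trees = []
for _ in range(50):
    residuals = y - prediction
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)
    trees.append(tree)

def boosted_predict(X_new):
    # Sum the scaled contributions of every tree in the ensemble
    return sum(learning_rate * t.predict(X_new) for t in trees)
```

Each round shrinks the remaining error, which is how a sequence of shallow, individually weak trees becomes a strong learner.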
XGBoost is short for Extreme Gradient Boosting. The Extreme part refers to pushing the limits of computation to achieve gains in accuracy and speed. XGBoost's surging popularity is largely due to its unparalleled success in Kaggle competitions. In Kaggle competitions, competitors build machine learning models in attempts to make the best predictions and win lucrative cash prizes. In comparison to other models, XGBoost has been crushing the competition.
Understanding the details of XGBoost requires understanding the landscape of machine learning within the context of gradient boosting. In order to paint a full picture, we start at the beginning, with the basics of machine learning.

What is machine learning?

Machine learning is the ability of computers to learn from data. In 2020, machine learning predicts human behavior, recommends products, identifies faces, outperforms poker professionals, discovers exoplanets, identifies diseases, operates self-driving cars, personalizes the internet, and communicates directly with humans. Machine learning is leading the artificial intelligence revolution and affecting the bottom line of nearly every major corporation.
In practice, machine learning means implementing computer algorithms whose weights are adjusted when new data comes in. Machine learning algorithms learn from datasets to make predictions about species classification, the stock market, company profits, human decisions, subatomic particles, optimal traffic routes, and more.
Machine learning is the best tool at our disposal for transforming big data into accurate, actionable predictions. Machine learning, however, does not occur in a vacuum. Machine learning requires rows and columns of data.

Data wrangling

Data wrangling is a comprehensive term that encompasses the various stages of data preprocessing before machine learning can begin. Data loading, data cleaning, data analysis, and data manipulation are all included within the sphere of data wrangling.
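As a small illustration of these stages, pandas handles each step in a line or two. The DataFrame below is invented for the example:

```python
import numpy as np
import pandas as pd

# Hypothetical raw data with a missing value and a text column
df = pd.DataFrame({
    'age': [25, 32, np.nan, 41],
    'city': ['Austin', 'Berkeley', 'Austin', 'Chicago'],
    'income': [48000, 61000, 52000, 75000],
})

# Cleaning: fill the missing age with the column median
df['age'] = df['age'].fillna(df['age'].median())

# Manipulation: convert the text column into numerical columns
df = pd.get_dummies(df, columns=['city'])
print(df.dtypes)
```

After these steps every column is numeric, which is the form machine learning algorithms require.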
