Hands-On Machine Learning with R

by Brad Boehmke and Brandon M. Greenwell
About this book

Hands-on Machine Learning with R provides a practical and applied approach to learning and developing intuition for today's most popular machine learning methods. This book serves as a practitioner's guide to the machine learning process and is meant to help the reader learn to apply the machine learning stack within R, which includes using various R packages such as glmnet, h2o, ranger, xgboost, keras, and others to effectively model and gain insight from their data. The book favors a hands-on approach, providing an intuitive understanding of machine learning concepts through concrete examples and just a little bit of theory.

Throughout this book, the reader will be exposed to the entire machine learning process, including feature engineering, resampling, hyperparameter tuning, model evaluation, and interpretation. The reader will also encounter powerful algorithms such as regularized regression, random forests, gradient boosting machines, deep learning, generalized low rank models, and more! By favoring a hands-on approach and using real-world data, the reader will gain an intuitive understanding of the architectures and engines that drive these algorithms and packages, understand when and how to tune the various hyperparameters, and be able to interpret model results. By the end of this book, the reader should have a firm grasp of R's machine learning stack and be able to implement a systematic approach for producing high-quality modeling results.

Features:

  • Offers a practical and applied introduction to the most popular machine learning methods.

  • Topics covered include feature engineering, resampling, deep learning, and more.

  • Uses a hands-on approach and real-world data.


Part II

Supervised Learning

4

Linear Regression

Linear regression, a staple of classical statistical modeling, is one of the simplest algorithms for doing supervised learning. Though it may seem somewhat dull compared to some of the more modern statistical learning approaches described in later chapters, linear regression is still a useful and widely applied statistical learning method. Moreover, it serves as a good starting point for more advanced approaches; as we will see in later chapters, many of the more sophisticated statistical learning approaches can be seen as generalizations to or extensions of ordinary linear regression. Consequently, it is important to have a good understanding of linear regression before studying more complex learning methods. This chapter introduces linear regression with an emphasis on prediction, rather than inference. An excellent and comprehensive overview of linear regression is provided in Kutner et al. (2005). See Faraway (2016b) for a discussion of linear regression in R (the book’s website also provides Python scripts).

4.1 Prerequisites

This chapter leverages the following packages:
# Helper packages
library(dplyr) # for data manipulation
library(ggplot2) # for awesome graphics
# Modeling packages
library(caret) # for cross-validation, etc.
# Model interpretability packages
library(vip) # variable importance
We’ll also continue working with the ames_train data set created in Section 2.7.
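For readers starting at this chapter, the sketch below shows one way ames_train can be recreated; it assumes the stratified 70/30 split conventions of Section 2.7, and the seed and proportion shown here are illustrative (they should match whatever was used earlier):
# Recreate the training data (a sketch; see Section 2.7 for the exact setup)
library(rsample) # for data splitting
ames <- AmesHousing::make_ames() # cleaned Ames housing data
set.seed(123) # for reproducibility
split <- initial_split(ames, prop = 0.7, strata = "Sale_Price")
ames_train <- training(split)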

4.2 Simple linear regression

Pearson’s correlation coefficient is often used to quantify the strength of the linear association between two continuous variables. In this section, we seek to fully characterize that linear relationship. Simple linear regression (SLR) assumes that the statistical relationship between two continuous variables (say X and Y) is (at least approximately) linear:
$$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i, \quad \text{for } i = 1, 2, \dots, n, \tag{4.1}$$
where Yi represents the i-th response value, Xi represents the i-th feature value, β0 and β1 are fixed, but unknown constants (commonly referred to as coefficients or parameters) that represent the intercept and slope of the regression line, respectively, and ϵi represents noise or random error. In this chapter, we’ll assume that the errors are normally distributed with mean zero and constant variance σ², denoted $\epsilon_i \overset{\text{iid}}{\sim} N(0, \sigma^2)$. Since the random errors are centered around zero (i.e., E(ϵ) = 0), linear regression is really a problem of estimating a conditional mean:
$$E(Y_i \mid X_i) = \beta_0 + \beta_1 X_i. \tag{4.2}$$
For brevity, we often drop the conditional piece and write E(Y|X) = E(Y). Consequently, the interpretation of the coefficients is in terms of the average, or mean response. For example, the intercept β0 represents the average response value when X = 0 (it is often not meaningful or of interest and is sometimes referred to as a bias term). The slope β1 represents the increase in the average response per one-unit increase in X (i.e., it is a rate of change).
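To build intuition for this model and its error assumption, the following sketch simulates data from Equation (4.1) and recovers the coefficients; the parameter values (β0 = 10, β1 = 2, σ = 1) are hypothetical and chosen purely for illustration:
# Simulate n = 100 observations from the SLR model (hypothetical parameters)
set.seed(123)
n <- 100
x <- runif(n, min = 0, max = 10) # feature values
y <- 10 + 2 * x + rnorm(n, mean = 0, sd = 1) # Y = beta0 + beta1*X + noise
coef(lm(y ~ x)) # estimates should land near the true values 10 and 2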

4.2.1 Estimation

Ideally, we want estimates of β0 and β1 that give us the “best fitting” line. But what is meant by “best fitting”? The most common approach is to use the method of least squares (LS) estimation; this form of linear regression is often referred to as ordinary least squares (OLS) regression. There are multiple ways to measure “best fitting”, but the LS criterion finds the “best fitting” line by minimizing the residual sum of squares (RSS):
$$RSS(\beta_0, \beta_1) = \sum_{i=1}^{n} \left[ Y_i - (\beta_0 + \beta_1 X_i) \right]^2 = \sum_{i=1}^{n} \left( Y_i - \beta_0 - \beta_1 X_i \right)^2. \tag{4.3}$$
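To make the LS criterion concrete, the sketch below codes Equation (4.3) directly and minimizes it with R's general-purpose optimizer optim(); the toy data are hypothetical, and the numeric estimates should agree with lm() up to optimizer tolerance:
# RSS as a function of the coefficients, per Equation (4.3)
rss <- function(beta, x, y) sum((y - beta[1] - beta[2] * x)^2)
set.seed(123)
x <- runif(50, 0, 10) # toy feature
y <- 5 + 3 * x + rnorm(50) # toy response
optim(c(0, 0), rss, x = x, y = y)$par # numeric LS estimates
coef(lm(y ~ x)) # closed-form LS estimates for comparison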
The LS estimates of β0 and β1 are denoted as β̂0 and β̂1, respectively. Once obtained, we can generate predicted values, say at X = Xnew, using the estimated regression equation:
$$\hat{Y}_{new} = \hat{\beta}_0 + \hat{\beta}_1 X_{new}, \tag{4.4}$$
where $\hat{Y}_{new} = \widehat{E(Y_{new} \mid X = X_{new})}$ is the estimated mean response at X = Xnew.
With the Ames housing data, suppose we wanted to model a linear relationship between the total above ground living space of a home (Gr_Liv_Area) and sale price (Sale_Price). To fit an OLS regression model in R we can use the lm() function:
model1 <- lm(Sale_Price ~ Gr_Liv_Area, data = ames_train)
The fitted model (model1) is displayed in the left plot in Figure 4.1 where the points represent the values of Sale_Price in the training data. In the right plot of Figure 4.1, the vertical lines represent the individual errors, called residuals, associated with each observation. The OLS criterion in Equation (4.3) identifies the “best fitting” line that minimizes the sum of squares of these residuals.
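With model1 in hand, we can inspect the estimated coefficients and apply Equation (4.4) via predict(); the 2,000 square foot value below is a hypothetical query point, and the printed numbers will depend on your training split:
coef(model1) # estimated intercept and slope
summary(model1) # standard errors, R-squared, etc.
# Predicted mean sale price for a hypothetical 2,000 sq ft home
predict(model1, newdata = data.frame(Gr_Liv_Area = 2000))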
FIGURE 4.1: The least squares fit from regressing sale price on living space for the Ames housing data. Left: Fitted regressi...

Table of contents

  1. Cover
  2. Half Title
  3. Series Page
  4. Title Page
  5. Copyright Page
  6. Dedication
  7. Table of Contents
  8. Preface
  9. I Fundamentals
  10. II Supervised Learning
  11. III Dimension Reduction
  12. IV Clustering
  13. Bibliography
  14. Index