PySpark Cookbook

Over 60 recipes for implementing big data processing and analytics using Apache Spark and Python

Denny Lee, Tomasz Drabas

About This Book

Combine the power of Apache Spark and Python to build effective big data applications.
  • Perform effective data processing, machine learning, and analytics using PySpark
  • Overcome challenges in developing and deploying Spark solutions using Python
  • Explore recipes for efficiently combining Python and Apache Spark to process data

Who This Book Is For

The PySpark Cookbook is for you if you are a Python developer looking for hands-on recipes for using the Apache Spark 2.x ecosystem in the best possible way. A thorough understanding of Python (and some familiarity with Spark) will help you get the best out of the book.

What You Will Learn
  • Configure a local instance of PySpark in a virtual environment
  • Install and configure Jupyter in local and multi-node environments
  • Create DataFrames from JSON and a dictionary using pyspark.sql
  • Explore regression and clustering models available in the ML module
  • Use DataFrames to transform data used for modeling
  • Connect to PubNub and perform aggregations on streams

In Detail

Apache Spark is an open source framework for efficient cluster computing with a strong interface for data parallelism and fault tolerance. The PySpark Cookbook presents effective and time-saving recipes for leveraging the power of Python and putting it to use in the Spark ecosystem.

You'll start by learning the Apache Spark architecture and how to set up a Python environment for Spark. You'll then get familiar with the modules available in PySpark and start using them effortlessly. In addition to this, you'll discover how to abstract data with RDDs and DataFrames, and understand the streaming capabilities of PySpark. You'll then move on to using ML and MLlib in order to solve any problems related to the machine learning capabilities of PySpark, and use GraphFrames to solve graph-processing problems. Finally, you will explore how to deploy your applications to the cloud using the spark-submit command.

By the end of this book, you will be able to use the Python API for Apache Spark to solve any problems associated with building data-intensive applications.

Style and approach

This book is a rich collection of recipes that will come in handy when you are working with PySpark. Addressing your common and not-so-common pain points, this is a book that you must have on the shelf.

Information

Year: 2018
ISBN: 9781788834254
Edition: 1

Installing and Configuring Spark

In this chapter, we will cover how to install and configure Spark, either as a local instance, a multi-node cluster, or in a virtual environment. You will learn the following recipes:
  • Installing Spark requirements
  • Installing Spark from sources
  • Installing Spark from binaries
  • Configuring a local instance of Spark
  • Configuring a multi-node instance of Spark
  • Installing Jupyter
  • Configuring a session in Jupyter
  • Working with Cloudera Spark images

Introduction

We cannot begin a book on Spark (well, on PySpark) without first specifying what Spark is. Spark is a powerful, flexible, open source, data processing and querying engine. It is extremely easy to use and provides the means to solve a huge variety of problems, ranging from processing unstructured, semi-structured, or structured data, through streaming, up to machine learning. With over 1,000 contributors from over 250 organizations (not to mention over 3,000 Spark Meetup community members worldwide), Spark is now one of the largest open source projects in the portfolio of the Apache Software Foundation.
The origins of Spark date back to 2009, when Matei Zaharia developed the first versions of the Spark processing engine at UC Berkeley as part of his PhD thesis; the project was open-sourced in 2010. Since then, Spark has become extremely popular, and its popularity stems from a number of reasons:
  • It is fast: It is estimated that Spark is 100 times faster than Hadoop when working purely in memory, and around 10 times faster when reading or writing data to a disk.
  • It is flexible: You can leverage the power of Spark from a number of programming languages; Spark natively supports interfaces in Scala, Java, Python, and R.
  • It is extensible: As Spark is an open source package, you can easily extend it by introducing your own classes or extending the existing ones.
  • It is powerful: Many machine learning algorithms are already implemented in Spark so you do not need to add more tools to your stack—most of the data engineering and data science tasks can be accomplished while working in a single environment.
  • It is familiar: Data scientists and data engineers, who are accustomed to using Python's pandas, or R's data.frames or data.tables, should have a much gentler learning curve (although the differences between these data types exist). Moreover, if you know SQL, you can also use it to wrangle data in Spark!
  • It is scalable: Spark can run locally on your machine (with all the limitations such a solution entails). However, the same code that runs locally can be deployed to a cluster of thousands of machines with little-to-no changes.
For the remainder of this book, we will assume that you are working in a Unix-like environment such as Linux (throughout this book, we will use Ubuntu Server 16.04 LTS) or macOS (running macOS High Sierra); all the code provided has been tested in these two environments. For this chapter (and some other ones, too), an internet connection is also required as we will be downloading a bunch of binaries and sources from the internet.
We will not be focusing on installing Spark in a Windows environment as it is not truly supported by the Spark developers. However, if you are inclined to try, you can follow some of the instructions you will find online, such as from the following link: http://bit.ly/2Ar75ld.
Knowing how to use the command line and how to set some environment variables on your system is useful, but not really required—we will guide you through the steps.
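If you have not worked with environment variables before, the following is a minimal, hypothetical illustration of what setting one from a bash Terminal looks like; the path used here is just a placeholder, not the location we will use later in the book:
export SPARK_HOME=/opt/spark          # hypothetical path -- point it at your Spark directory
export PATH=$SPARK_HOME/bin:$PATH     # expose Spark's command-line tools, such as pyspark
echo $SPARK_HOME                      # print the variable to confirm it is set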

Installing Spark requirements

Spark requires a handful of dependencies to be present on your machine before you can install and use it. In this recipe, we will focus on getting your machine ready for the Spark installation.

Getting ready

To execute this recipe, you will need a bash Terminal and an internet connection.
Also, before we start any work, you should clone the GitHub repository for this book. The repository contains all the code (in the form of notebooks) and all the data you will need to follow the examples in this book. To clone the repository, go to http://bit.ly/2ArlBck, click on the Clone or download button, and copy the URL that shows up by clicking on the icon next to it.
Next, go to your Terminal and issue the following command:
git clone git@github.com:drabastomek/PySparkCookbook.git
If your git environment is set up properly, the whole GitHub repository should clone to your disk. No other prerequisites are required.
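The preceding command clones over SSH; if you have not configured SSH keys for GitHub, cloning over HTTPS works as well (assuming GitHub's standard HTTPS clone URL for this repository):
# Clone over HTTPS instead of SSH -- no SSH key setup required
git clone https://github.com/drabastomek/PySparkCookbook.git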

How to do it...

There are really just two main requirements for installing PySpark: Java and Python. Additionally, you can install Scala and R if you want to use those languages, and we will also check for Maven, which we will use to compile the Spark sources.
To do this, we will use the checkRequirements.sh script to check for all of the requirements; the script is located in the Chapter01 folder of the GitHub repository.
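Before running it, make sure the script is executable. The commands below are a small sketch that assumes the repository was cloned into a PySparkCookbook folder in your current directory:
cd PySparkCookbook/Chapter01    # assumes the repository was cloned into ./PySparkCookbook
chmod +x checkRequirements.sh   # make the script executable if it is not already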
The following code block shows the high-level portions of the script found in the Chapter01/checkRequirements.sh file. Note that some portions of the code were omitted here for brevity:
#!/bin/bash

# Shell script for checking the dependencies
#
# PySpark Cookbook
# Author: Tomasz Drabas, Denny Lee
# Version: 0.1
# Date: 12/2/2017

_java_required=1.8
_python_required=3.4
_r_required=3.1
_scala_required=2.11
_mvn_required=3.3.9

# parse command line arguments
_args_len="$#"
...

printHeader
checkJava
checkPython

if [ "${_check_R_req}" = "true" ]; then
checkR
fi

if [ "${_check_Scala_req}" = "true" ]; then
checkScala
fi

if [ "${_check_Maven_req}" = "true" ]; then
checkMaven
fi

How it works...

First, we specify all the required packages and their minimum required versions; looking at the preceding code, you can see that Spark 2.3.1 requires Java 1.8+ and Python 3.4 or higher (and we will always check for these two environments). Additionally, if you want to use R or Scala, the minimum required versions for these two packages are 3.1 and 2.11, respectively. Maven, as mentioned earlier, will be used to compile the Spark sources, and for that, Spark requires at least version 3.3.9 of Maven.
You can check the Spark requirements here: https://spark.apache.org/docs/latest/index.html
You can check the requirements for building Spark here: https://spark.apache.org/docs/latest/building-spark.html.
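The individual check functions, such as checkJava and checkPython, are omitted from the preceding listing for brevity. Purely as an illustration of the idea (and not the script's actual implementation), a Java check could look along these lines; note that parsing the java -version output this way is an assumption that may need adjusting for your JDK build:
# Illustrative sketch only -- not the actual contents of checkRequirements.sh
checkJava() {
  if ! command -v java > /dev/null 2>&1; then
    echo "Java is not installed"
    return 1
  fi
  # java -version prints to stderr, for example: java version "1.8.0_161"
  _java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')
  echo "Java version found: ${_java_version} (required: ${_java_required}+)"
}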
Next, we parse the command-line arguments:
if [ "$_args_len" -ge 0 ]; then
while [[ "$#" -gt 0 ]]
do
key="$1"
case $key in
-m|--Maven)
_check_Maven_req="true"
shift # past argument
;;
-r|--R)
_check_R_req="true"
shift # past argument
;;
-s|--Scala)
_check_Scala_req="true"
shift # past argument
;;
*)
shift # past argument
esac
done
fi
As a user, you can specify whether to additionally check for the R, Scala, and Maven dependencies. To do so, run the following command from your command line (this checks for all three):
./checkRequirements.sh -s -m -r
The following is also a perfectly vali...
