
Mastering Large Datasets with Python
Parallelize and Distribute Your Python Code
- 312 pages
- English
- ePUB (mobile friendly)
- Available on iOS & Android
About this book
Summary
Modern data science solutions need to be clean, easy to read, and scalable. In Mastering Large Datasets with Python, author J.T. Wolohan teaches you how to take a small project and scale it up using a functionally influenced approach to Python coding. You'll explore methods and built-in Python tools that lend themselves to clarity and scalability, like the high-performing parallelism method, as well as distributed technologies that allow for high data throughput. The abundant hands-on exercises in this practical tutorial will lock in these essential skills for any large-scale data science project. Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
About the technology
Programming techniques that work well on laptop-sized data can slow to a crawl—or fail altogether—when applied to massive files or distributed datasets. By mastering the powerful map and reduce paradigm, along with the Python-based tools that support it, you can write data-centric applications that scale efficiently without requiring codebase rewrites as your requirements change.
About the book
Mastering Large Datasets with Python teaches you to write code that can handle datasets of any size. You'll start with laptop-sized datasets that teach you to parallelize data analysis by breaking large tasks into smaller ones that can run simultaneously. You'll then scale those same programs to industrial-sized datasets on a cluster of cloud servers. With the map and reduce paradigm firmly in place, you'll explore tools like Hadoop and PySpark to efficiently process massive distributed datasets, speed up decision-making with machine learning, and simplify your data storage with AWS S3.
What's inside
- An introduction to the map and reduce paradigm
- Parallelization with the multiprocessing module and pathos framework
- Hadoop and Spark for distributed computing
- Running AWS jobs to process large datasets
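To give a feel for the paradigm the book builds on, here is a minimal sketch using only Python built-ins and the standard library: a `map` transforms each record independently, a `reduce` accumulates the results, and the same map can then be parallelized with `multiprocessing.Pool`. The `square` and `add` functions and the sample data are illustrative, not taken from the book:

```python
from functools import reduce
from multiprocessing import Pool

def square(n):
    # Stand-in for a per-record transformation.
    return n * n

def add(acc, n):
    # Accumulator for the reduce step.
    return acc + n

if __name__ == "__main__":
    data = range(10)

    # Sequential map and reduce with Python built-ins.
    total = reduce(add, map(square, data), 0)

    # The same map, spread across worker processes.
    with Pool(4) as pool:
        parallel_total = reduce(add, pool.map(square, data), 0)

    assert total == parallel_total  # both compute 285
```

Because the transformation is expressed as a pure function passed to `map`, swapping the sequential built-in for `Pool.map` (or, at larger scale, a distributed map in PySpark) changes how the work runs without rewriting the logic itself.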
About the reader
For Python programmers who need to work faster with more data.
About the author
J. T. Wolohan is a lead data scientist at Booz Allen Hamilton, and a PhD researcher at Indiana University, Bloomington.
Table of Contents
PART 1
- 1 Introduction
- 2 Accelerating large dataset work: Map and parallel computing
- 3 Function pipelines for mapping complex transformations
- 4 Processing large datasets with lazy workflows
- 5 Accumulation operations with reduce
- 6 Speeding up map and reduce with advanced parallelization
PART 2
- 7 Processing truly big datasets with Hadoop and Spark
- 8 Best practices for large data with Apache Streaming and mrjob
- 9 PageRank with map and reduce in PySpark
- 10 Faster decision-making with machine learning and PySpark
PART 3
- 11 Large datasets in the cloud with Amazon Web Services and S3
- 12 MapReduce in the cloud with Amazon's Elastic MapReduce
Table of contents
- Copyright
- Brief Table of Contents
- Table of Contents
- Preface
- Acknowledgments
- About this book
- About the author
- About the cover illustration
- Part 1.
- Part 2.
- Part 3.
- Index
- List of Figures
- List of Tables
- List of Listings