Mastering Large Datasets with Python:

Parallelize and Distribute Your Python Code (Paperback)

Author: John T. Wolohan



Save 10%
MRP: 4,279.14
You Pay: 3,851.23
You save: 427.91
Lead time to ship (default): usually ships in 25 days
Reward points: 39 points

About the technology

Programming techniques that work well on laptop-sized data can slow to a crawl—or fail altogether—when applied to massive files or distributed datasets. By mastering the powerful map and reduce paradigm, along with the Python-based tools that support it, you can write data-centric applications that scale efficiently without requiring codebase rewrites as your requirements change.
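As a minimal sketch of the map and reduce paradigm described above, in plain Python with the standard library's `functools.reduce` (the documents and counts here are invented for illustration, not taken from the book):

```python
from functools import reduce

# Toy corpus standing in for a large dataset.
documents = ["the quick brown fox", "jumps over", "the lazy dog"]

# Map step: transform each document independently (here, into its word count).
word_counts = list(map(lambda doc: len(doc.split()), documents))

# Reduce step: accumulate the per-document results into one value.
total = reduce(lambda acc, n: acc + n, word_counts, 0)

print(word_counts)  # [4, 2, 3]
print(total)        # 9
```

Because each map step is independent, the same structure can later be parallelized or distributed without rewriting the transformation logic.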

About the book

Mastering Large Datasets with Python teaches you to write code that can handle datasets of any size. You’ll start with laptop-sized datasets that teach you to parallelize data analysis by breaking large tasks into smaller ones that can run simultaneously. You’ll then scale those same programs to industrial-sized datasets on a cluster of cloud servers. With the map and reduce paradigm firmly in place, you’ll explore tools like Hadoop and PySpark to efficiently process massive distributed datasets, speed up decision-making with machine learning, and simplify your data storage with AWS S3.

What's inside

  • An introduction to the map and reduce paradigm
  • Parallelization with the multiprocessing module and pathos framework
  • Hadoop and Spark for distributed computing
  • Running AWS jobs to process large datasets

About the reader

For Python programmers who need to work faster with more data.

About the author

J. T. Wolohan is a lead data scientist at Booz Allen Hamilton, and a PhD researcher at Indiana University, Bloomington.

Table of Contents:


1  Introduction

2  Accelerating large dataset work: Map and parallel computing

3  Function pipelines for mapping complex transformations

4  Processing large datasets with lazy workflows

5  Accumulation operations with reduce

6  Speeding up map and reduce with advanced parallelization


7  Processing truly big datasets with Hadoop and Spark

8  Best practices for large data with Apache Streaming and mrjob

9  PageRank with map and reduce in PySpark

10  Faster decision-making with machine learning and PySpark


11  Large datasets in the cloud with Amazon Web Services and S3

12  MapReduce in the cloud with Amazon’s Elastic MapReduce

Publisher: Manning Publications
