Data in all domains is getting bigger. How can you work with it efficiently? Recently updated for Spark 1.3, this book introduces Apache Spark, the open source cluster computing system that makes data analytics fast to write and fast to run. With Spark, you can tackle big datasets quickly through simple APIs in Python, Java, and Scala. This edition includes new information on Spark SQL, Spark Streaming, setup, and Maven coordinates.
Written by the developers of Spark, this book will have data scientists and engineers up and running in no time. You'll learn how to express parallel jobs with just a few lines of code, covering applications from simple batch jobs to stream processing and machine learning.
- Quickly dive into Spark capabilities such as distributed datasets, in-memory caching, and the interactive shell
- Leverage Spark’s powerful built-in libraries, including Spark SQL, Spark Streaming, and MLlib
- Use one programming paradigm instead of mixing and matching tools like Hive, Hadoop, Mahout, and Storm
- Learn how to deploy interactive, batch, and streaming applications
- Connect to data sources including HDFS, Hive, JSON, and S3
- Master advanced topics like data partitioning and shared variables