This blog post is adapted from Data Science Hacks by the author.
Apache Spark is another Apache-licensed top-level project that can perform large-scale data processing far faster than Hadoop (MapReduce 1.0, in this comparison). This speed comes from the concept behind Spark: the Resilient Distributed Dataset (RDD). An RDD is essentially a collection of objects spread across a cluster, stored in RAM or on disk, and automatically rebuilt on failure. Its purpose is to make higher-level, parallel operations on data as straightforward as possible.
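To make the RDD idea concrete, here is a toy sketch in plain Python. This is not the real Spark API; the class and method names are invented for illustration. It shows the two properties described above: data split into partitions, and a lineage of transformations that can be replayed to rebuild any partition lost on failure.

```python
# Toy illustration of the RDD concept (NOT the real Spark API).
class ToyRDD:
    def __init__(self, partitions, lineage=None):
        self.partitions = partitions     # list of lists, one per "node"
        self.lineage = lineage or []     # transformations recorded so far

    def map(self, fn):
        # Lazily record the transformation; nothing is computed yet.
        return ToyRDD(self.partitions, self.lineage + [fn])

    def compute_partition(self, i):
        # Replay the lineage against the source data: this is how a
        # lost partition would be rebuilt after a node failure.
        part = self.partitions[i]
        for fn in self.lineage:
            part = [fn(x) for x in part]
        return part

    def collect(self):
        # Gather every partition's computed results.
        return [x for i in range(len(self.partitions))
                for x in self.compute_partition(i)]

rdd = ToyRDD([[1, 2], [3, 4]]).map(lambda x: x * 10)
print(rdd.collect())             # [10, 20, 30, 40]
print(rdd.compute_partition(1))  # [30, 40] -- rebuilt from lineage alone
```

The key design point is that transformations are recorded rather than eagerly applied, so a partition can always be recomputed from its lineage instead of being replicated.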
Apache Spark is often referred to as a data processing engine. Simply put, Spark is a cluster computing engine that makes it easy to handle a wide range of workloads: ETL, SQL-like queries, machine learning, and streaming. The amount of code you write is also greatly reduced compared to traditional MapReduce development. It has also been reported to run some machine-learning workloads up to 10x faster than Apache Mahout.
The Spark engine has four major components built on top of Spark Core: Spark SQL, Spark Streaming, MLlib (machine learning), and GraphX (graph processing).
To install Spark, we need the following in the OS (Mac/Debian):