Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD): a fault-tolerant collection of elements that can be operated on in parallel. RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs.

Creating RDDs

Spark provides two ways to create RDDs: loading an external dataset, and parallelizing an existing collection in your driver program.