Presenting Koalas, a new open source project unveiled by Databricks that brings the simplicity of pandas to the scalability of Apache Spark™.
Data science with Python has exploded in popularity over the past few years, and pandas has emerged as the linchpin of the ecosystem. When data scientists get their hands on a data set, pandas is typically the first tool they reach for: it is the go-to tool for data wrangling and analysis. In fact, pandas' read_csv is often the very first command students run in their data science journey.
The problem? pandas does not scale well to big data. It was designed for small data sets that a single machine can handle. Apache Spark, on the other hand, has emerged as the de facto standard for big data workloads. Today many data scientists use pandas for coursework and small data tasks, but when they work with very large data sets, they either have to migrate their code to PySpark's similar but distinct API or downsample their data so that it fits in pandas.
Now with Koalas, data scientists get the best of both worlds and can make the transition from a single machine to a distributed environment without needing to learn a new framework.
In this latest Data Science Central webinar, the developers of Koalas will show you how:
Koalas removes the need to decide whether to use pandas or PySpark for a given data set
For work that was initially written in pandas for a single machine, Koalas allows data scientists to scale up their code on Spark by simply switching out pandas for Koalas
Koalas unlocks big data for more data scientists in an organization since they no longer need to learn PySpark to leverage Spark
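As a minimal sketch of the idea, the snippet below shows a typical single-machine pandas workflow; the Koalas equivalent appears as comments, since running it requires a Spark environment. It assumes Koalas' `databricks.koalas` import path, with the caveat that API coverage is not exhaustive, though common operations carry over unchanged.

```python
import pandas as pd

# Single-machine pandas workflow.
# With Koalas, the intent is that only the import changes:
#   import databricks.koalas as ks
#   df = ks.DataFrame(...)   # same calls, executed on Spark
df = pd.DataFrame({"group": ["a", "a", "b"], "value": [1, 2, 3]})

# Familiar pandas operations: group and aggregate.
totals = df.groupby("group")["value"].sum()
print(totals.to_dict())  # {'a': 3, 'b': 3}
```

The appeal is that the second half of the snippet reads identically under either import, so code developed against a sample in pandas can later run on the full data set via Spark.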
Tony Liu, Product Manager, Machine Learning – Databricks
Tim Hunter, Sr. Software Engineer and Technical Lead, Co-Creator of Koalas – Databricks
Stephanie Glen, Editorial Director – Data Science Central