All Videos Tagged API (Data Science Central) - Data Science Central 2019-12-09T19:38:06Z https://www.datasciencecentral.com/video/video/listTagged?tag=API&rss=yes&xn_auth=no DSC Webinar Series: From Pandas to Apache Spark™ tag:www.datasciencecentral.com,2019-07-03:6448529:Video:851584 2019-07-03T19:18:10.864Z Tim Matteson https://www.datasciencecentral.com/profile/2edcolrgc4o4b <a href="https://www.datasciencecentral.com/video/dsc-webinar-series-from-pandas-to-apache-spark"><br /> <img alt="Thumbnail" height="135" src="https://storage.ning.com/topology/rest/1.0/file/get/3189210470?profile=original&amp;width=240&amp;height=135" width="240" /><br /> </a><br />***Please be aware there is a slight audio issue from approximately 10:45 to 13:00 in the recording***<br /> <br /> Presenting Koalas, a new open-source project unveiled by Databricks that brings the simplicity of pandas to the scalability of Apache Spark™.<br /> <br /> Data science with Python has exploded in popularity over the past few years, and pandas has emerged as the linchpin of the ecosystem. When data scientists get their hands on a data set, pandas is usually the first exploration tool they reach for: it is the ultimate tool for data wrangling and analysis.
In fact, pandas’ read_csv is often the very first command students run in their data science journey.<br /> <br /> The problem? pandas does not scale well to big data. It was designed for small data sets that a single machine can handle. Apache Spark, on the other hand, has emerged as the de facto standard for big data workloads. Today many data scientists use pandas for coursework and small-data tasks; when they work with very large data sets, they either have to migrate their code to PySpark's similar but distinct API or downsample their data so that it fits in pandas.<br /> <br /> Now, with Koalas, data scientists get the best of both worlds and can make the transition from a single machine to a distributed environment without needing to learn a new framework.<br /> <br /> In this latest Data Science Central webinar, the developers of Koalas will show you how:<br /> <br /> ● Koalas removes the need to decide whether to use pandas or PySpark for a given data set.<br /> ● For work that was initially written in pandas for a single machine, Koalas allows data scientists to scale up their code on Spark by simply swapping pandas out for Koalas.<br /> ● Koalas unlocks big data for more data scientists in an organization, since they no longer need to learn PySpark to leverage Spark.<br /> <br /> Speakers:<br /> Tony Liu, Product Manager, Machine Learning - Databricks<br /> Tim Hunter, Sr.
Software Engineer and Technical Lead, Co-Creator of Koalas - Databricks<br /> <br /> Hosted by:<br /> Stephanie Glen, Editorial Director - Data Science Central Parallelize R Code Using Apache® Spark™ tag:www.datasciencecentral.com,2017-08-15:6448529:Video:607234 2017-08-15T23:37:42.031Z Tim Matteson https://www.datasciencecentral.com/profile/2edcolrgc4o4b <a href="https://www.datasciencecentral.com/video/parallelize-r-code-using-apache-spark"><br /> <img alt="Thumbnail" height="135" src="https://storage.ning.com/topology/rest/1.0/file/get/2781530416?profile=original&amp;width=240&amp;height=135" width="240" /><br /> </a><br />R is the latest language added to Apache Spark, and the SparkR API differs slightly from PySpark's. SparkR's evolving interface to Apache Spark offers a wide range of APIs and capabilities to data scientists and statisticians. With the release of Spark 2.0 and subsequent releases, the R API officially supports executing user code on distributed data.
This is done primarily through a family of apply() functions.<br /> <br /> In this Data Science Central webinar, we will explore the following:<br /> <br /> ● Provide an overview of this new functionality in SparkR.<br /> <br /> ● Show how to use this API with small changes to regular R code, using dapply().<br /> <br /> ● Focus on how to correctly use this API to parallelize existing R packages.<br /> <br /> ● Consider performance and examine correctness when using the apply family of functions in SparkR.<br /> <br /> Speaker: Hossein Falaki, Software Engineer -- Databricks Inc.<br /> <br /> Hosted by: Bill Vorhies, Editorial Director -- Data Science Central
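To illustrate the claim in the first webinar description above, that scaling pandas code with Koalas amounts to "simply switching out pandas for Koalas": in Koalas (as released by Databricks in 2019) the change is essentially the import line. The sketch below uses made-up data and column names; only the pandas version is executed here, since the Koalas version requires a running Spark cluster.

```python
# Hypothetical sketch of the pandas-to-Koalas switch described in the
# Koalas webinar above. On a Spark cluster, the only change needed would
# be the import line:
#     import databricks.koalas as pd   # instead of the pandas import below
# after which the same DataFrame code runs distributed on Spark.
import pandas as pd

df = pd.DataFrame({"city": ["NYC", "NYC", "SF"], "sales": [10, 20, 30]})
totals = df.groupby("city")["sales"].sum()
print(int(totals["NYC"]))  # 30
```

(Koalas has since been folded into Apache Spark itself as `pyspark.pandas`, where the same one-line-import idea applies.)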
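The SparkR webinar above centers on the dapply() pattern: apply a user-supplied function to each partition of a distributed data set, then combine the per-partition results. As a rough illustration of that idea (not SparkR itself, and using Python's standard library with threads in place of Spark executors):

```python
# Stdlib illustration of the partition-wise apply idea behind SparkR's
# dapply(): run a user function once per partition, then combine results.
# SparkR runs the function on Spark executors; here we just use threads.
from concurrent.futures import ThreadPoolExecutor

def partition_sum(rows):
    # User code applied to one partition, playing the role of the
    # function passed to dapply().
    return sum(rows)

data = list(range(100))
partitions = [data[i::4] for i in range(4)]  # split into 4 "partitions"

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(partition_sum, partitions))

print(sum(partials))  # 4950, same as summing the data on one machine
```

The correctness point raised in the webinar's last bullet shows up even in this toy version: the user function must produce per-partition results that combine into the right global answer regardless of how the data is split.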