Apache Spark, a data-analytics favorite, is emerging as a reference standard for Big Data and bills itself as a "fast and general engine for large-scale data processing". In our previous post, we detailed how to expand ML tooling using a PySpark kernel, leveraging the Jupyter notebook's interactive interface to develop and experiment in Python. In this post, we'll describe how to leverage the Apache Toree multi-interpreter to work not just in Python but in Scala, R, and SQL as well. The GitHub documentation for this project, still in Apache incubation, can be both cryptic and overwhelming, but the results are indeed encouraging. You can read more and follow the process here.
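As a rough sketch of the setup (exact flags can vary between Toree releases, so treat this as an assumption to verify against the project docs), installing Toree and registering its kernels with Jupyter typically looks like:

```shell
# Install the Apache Toree package from PyPI
# (assumes Spark is already installed locally).
pip install toree

# Register Toree's kernels with Jupyter. The --interpreters flag
# selects which languages get their own kernel; SPARK_HOME is
# assumed to point at your local Spark installation.
jupyter toree install --spark_home="$SPARK_HOME" \
    --interpreters=Scala,PySpark,SparkR,SQL
```

After this, each selected interpreter appears as a separate kernel in the Jupyter "New" menu, so a single notebook server can host Scala, Python, R, and SQL sessions against the same Spark cluster.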