Deep learning model performance is known to scale well with data size, but training these models can be notoriously time-consuming. As more companies adopt deep learning, the need for distributed deep learning frameworks is more pressing than ever.
In this webinar, we’ll share:
How distributed deep learning works, with an overview of frameworks including TensorFlow, Keras, and PyTorch.
How Databricks makes it easy for data scientists to migrate their single-machine workloads to distributed workloads at every stage of a deep learning project.
A demo of distributed deep learning training using our newly released feature, HorovodRunner.