Workshop on scalable and distributed machine learning using Ray

Presentation
Code

As the compute capacity available for training AI models rapidly increases, researchers face the question of how best to use it. Distributing the training of a single model is well supported by frameworks such as PyTorch, but in clinical applications limited dataset sizes can make this hard to scale. In this workshop we instead explain how the Ray toolkit can be used to scale up the number of models we train, with the goal of improving the statistical significance of our deep learning results in processes such as cross-validation and hyperparameter tuning.
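To give a flavour of the pattern the workshop covers, here is a minimal sketch of running one training job per cross-validation fold in parallel with Ray's core task API. The `train_fold` function is a hypothetical placeholder (not from the workshop materials); in practice it would train and evaluate a real PyTorch model on the given fold.

```python
import random

import ray

ray.init()  # connects to an existing Ray cluster if configured, else starts a local one

@ray.remote(num_gpus=0)  # per-task resource request; e.g. num_gpus=1 on a GPU cluster
def train_fold(fold_id: int, num_folds: int) -> dict:
    # Hypothetical training routine: a real version would train a model on all
    # folds except `fold_id` and evaluate it on the held-out fold.
    random.seed(fold_id)
    return {"fold": fold_id, "val_accuracy": 0.8 + 0.1 * random.random()}

# Launch one task per fold; Ray schedules them across the available workers.
num_folds = 5
futures = [train_fold.remote(i, num_folds) for i in range(num_folds)]
results = ray.get(futures)  # blocks until all folds have finished

for r in sorted(results, key=lambda r: r["fold"]):
    print(f"fold {r['fold']}: validation accuracy {r['val_accuracy']:.3f}")
```

The same pattern extends to hyperparameter tuning by launching one task per configuration; Ray also offers the higher-level Ray Tune library for that use case.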

Who Should Attend: Attendees should understand machine learning concepts, particularly distributed training and hyperparameter tuning, be proficient in Python, and have basic knowledge of cloud computing.