The distributed run feature is built for distributed deep learning: your training code runs across multiple machines in parallel, so you get results faster.
If you write your training code using the Horovod framework, which works with TensorFlow, Keras, PyTorch, and MXNet, we can easily send it to run across however many machines you want. All you need to do is add the --distributed parameter to your spell run command.
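To give a sense of what Horovod-ready code looks like, here is a minimal sketch using Horovod's standard PyTorch API. The model, learning rate, and dataset are placeholders, not anything Spell requires:

```python
import torch
import horovod.torch as hvd

# Initialize Horovod; one copy of this script runs on each machine.
hvd.init()

# Pin each worker process to its local GPU.
torch.cuda.set_device(hvd.local_rank())

# Placeholder model -- substitute your own network here.
model = torch.nn.Linear(10, 1).cuda()

# Scale the learning rate by the number of workers, per Horovod convention.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer so gradients are averaged across all machines each step.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters()
)

# Broadcast initial state from rank 0 so every worker starts identically.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)
```

With a script instrumented like this, launching a two-machine run would look something like `spell run --distributed 2 "python train.py"` (assuming --distributed takes the machine count, per the "however many machines you want" note above; the script name train.py is illustrative).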
For more info, take a look at our in-depth guide here.