
Distributed training


wav2letter++ supports distributed training on multiple GPUs out of the box. To run on multiple GPUs, pass the flag -enable_distributed true and launch the training binary with MPI:

mpirun -n 8 <train_cpp_binary> [train|continue|fork] \
-enable_distributed true \
<... other flags ...>

The above command runs data-parallel training with 8 processes (e.g. one process per GPU on an 8-GPU machine).
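
The same flag also works for multi-node jobs by pointing mpirun at a standard MPI hostfile. The sketch below is an illustration only: the hostnames, hostfile contents, and the train.cfg flags file are hypothetical, and the exact mpirun options depend on your MPI distribution (shown here for Open MPI).

# hosts.txt (hypothetical): two machines with 8 GPUs each
#   node01 slots=8
#   node02 slots=8

mpirun -n 16 -hostfile hosts.txt <train_cpp_binary> train \
-enable_distributed true \
-flagsfile train.cfg

Here -n 16 starts one process per GPU across both machines; the remaining training flags are assumed to be collected in train.cfg.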