PyTorch model training using the DistributedDataParallel module


DDP-example

A PyTorch example of single-GPU and distributed multi-GPU training.

Requirements

  • Python 3.8.6

Install the dependencies:

pip install -r requirements.txt

Training

Single GPU training:

CUDA_VISIBLE_DEVICES=0 python train.py

Distributed training using two GPUs:

CUDA_VISIBLE_DEVICES=0,1 python train_ddp.py -g 2
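The contents of train_ddp.py are not shown here, but the usual DistributedDataParallel pattern it presumably follows (with -g giving the number of GPUs/processes) can be sketched roughly as below. All model, data, and hyperparameter choices are stand-ins, and the "gloo" backend replaces "nccl" so the sketch also runs on CPU:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # Each process handles one GPU; rank identifies the process.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    # A GPU run would use the "nccl" backend; "gloo" also works on CPU.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = DDP(torch.nn.Linear(4, 2))  # wraps the model for gradient sync
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    x, y = torch.randn(8, 4), torch.randn(8, 2)  # stand-in batch
    for _ in range(3):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()  # gradients are all-reduced across ranks here
        optimizer.step()

    dist.destroy_process_group()
    return loss.item()

# A real script launches one process per GPU, e.g.
# torch.multiprocessing.spawn(train, args=(2,), nprocs=2);
# a single rank is run here for illustration.
final_loss = train(0, 1)
```

In a real multi-GPU run, each rank would also use a torch.utils.data.DistributedSampler so every process sees a disjoint shard of the dataset.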

Distributed training using two GPUs with Mixed Precision:

CUDA_VISIBLE_DEVICES=0,1 python train_ddp_mp.py -g 2
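train_ddp_mp.py presumably combines the DDP setup above with PyTorch's automatic mixed precision (torch.cuda.amp). The mixed-precision part alone looks roughly like this; the model, shapes, and optimizer are assumptions, and the sketch falls back to full precision when no GPU is present:

```python
import torch
from torch.cuda.amp import autocast, GradScaler

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"  # mixed precision needs a GPU; fp32 otherwise

model = torch.nn.Linear(4, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = GradScaler(enabled=use_amp)  # scales the loss to avoid fp16 underflow

x = torch.randn(8, 4, device=device)
y = torch.randn(8, 2, device=device)

optimizer.zero_grad()
with autocast(enabled=use_amp):  # forward pass runs in fp16 where safe
    loss = torch.nn.functional.mse_loss(model(x), y)
scaler.scale(loss).backward()  # backward pass on the scaled loss
scaler.step(optimizer)         # unscales gradients, then steps the optimizer
scaler.update()
```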
