Run locally on multiple GPUs #29
Hi @maximotus, could you make the question clearer? If you are referring to running inference, then you don't need parallelism at all. If you mean running a training job, we use …
Sure. I was trying to run your demo script, but if I pass the cuda flag, the run fails because it does not fit into GPU memory. I thought enabling parallelism could solve this, since I have 4 GPUs with 10 GB each available. So I wonder now how I can manage to run the inference on more than one GPU so that I will not get an out-of-memory error.

I tried out wrapping your model with nn.DataParallel, but that is not optimal since I would need to adapt the model classes. So I was thinking about adapting your code for these needs (e.g. the custom methods), but I thought it would be good to ask you about this issue first, since I may have overlooked a more trivial solution.

Cheers,
Max
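A minimal sketch of the layer-splitting idea behind model parallelism, assuming a plain PyTorch model (the toy module and device names here are illustrative, not the repository's actual code; the snippet falls back to CPU when fewer than two GPUs are visible):

```python
import torch
import torch.nn as nn

# With 2+ GPUs, put each half of the model on its own device; otherwise
# run everything on CPU so the sketch stays runnable anywhere.
multi_gpu = torch.cuda.device_count() >= 2
dev0 = torch.device("cuda:0") if multi_gpu else torch.device("cpu")
dev1 = torch.device("cuda:1") if multi_gpu else torch.device("cpu")

class SplitModel(nn.Module):
    """Toy stand-in for a large model split across two devices."""
    def __init__(self):
        super().__init__()
        # First half of the layers lives on device 0 ...
        self.part1 = nn.Sequential(nn.Linear(16, 32), nn.ReLU()).to(dev0)
        # ... second half on device 1, halving per-GPU memory use.
        self.part2 = nn.Linear(32, 4).to(dev1)

    def forward(self, x):
        h = self.part1(x.to(dev0))
        # Move activations between devices by hand.
        return self.part2(h.to(dev1))

model = SplitModel().eval()
with torch.no_grad():
    out = model(torch.randn(8, 16))
print(out.shape)  # torch.Size([8, 4])
```

Unlike nn.DataParallel, this reduces the memory needed per GPU, which is what matters when a single model does not fit on one 10 GB card.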
Hi @maximotus, did you figure this out? You will need at least a 20 GB GPU. If you don't have one, the issue can be addressed with model parallelism. Look into https://www.deepspeed.ai/tutorials/pipeline/ for pipeline (model) parallelism.
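For intuition, the pipeline-parallel idea from the linked DeepSpeed tutorial can be sketched in plain PyTorch: the model is cut into stages, and the batch is split into micro-batches that flow through the stages. This is only a data-flow illustration (toy layers, illustrative names); on real hardware each stage would live on its own GPU and DeepSpeed would overlap the stages' work:

```python
import torch
import torch.nn as nn

# Two pipeline "stages"; with real GPUs each stage would sit on its own
# device, e.g. stage1.to("cuda:0") and stage2.to("cuda:1").
stage1 = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
stage2 = nn.Linear(32, 4)

def pipelined_forward(x, n_micro=4):
    # Split the batch into micro-batches so that, on real hardware, both
    # stages can be busy at the same time; here it just shows the data flow.
    outs = []
    for mb in x.chunk(n_micro):
        h = stage1(mb)          # would run on GPU 0
        outs.append(stage2(h))  # would run on GPU 1
    return torch.cat(outs)

with torch.no_grad():
    y = pipelined_forward(torch.randn(8, 16))
print(y.shape)  # torch.Size([8, 4])
```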
Hello,

great work! What are the minimal adaptations I need to apply to the code so I can run the narrator on multiple GPUs locally? nn.DataParallel is not optimal since I would need to adapt the model classes.

Cheers,
Max
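For context, wrapping a model in nn.DataParallel looks like the sketch below (the model here is a hypothetical stand-in). Note that DataParallel replicates the whole module on every visible GPU and splits the *batch* across them, so it does not reduce the per-GPU memory footprint of a single large model, which is why it does not help with the out-of-memory problem discussed in this issue:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the narrator model.
model = nn.Linear(16, 4)

# Replicates the module on each GPU and scatters inputs along dim 0;
# with no GPUs available it simply runs the wrapped module as-is.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

with torch.no_grad():
    out = model(torch.randn(8, 16))
print(out.shape)  # torch.Size([8, 4])
```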