I think there are bugs in train_pose.py #10
If you train with one GPU and change line 49, "model = torch.nn.DataParallel(model, device_ids=args.gpu).cuda()", to "model = model.cuda()", that works. Because torch.nn.DataParallel wraps the model, every parameter key gains a "module." prefix; that is why the test script uses k[7:] to strip the prefix when loading the checkpoint. If you train with one GPU, you also need to change the test script accordingly.
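The prefix "module." is exactly 7 characters long, which is where the k[7:] slice comes from. A minimal sketch of that key remapping, using a plain dict in place of a real PyTorch state_dict and hypothetical layer names:

```python
# Keys saved from a torch.nn.DataParallel-wrapped model carry a
# "module." prefix; a plain (single-GPU) model expects them without it.
saved_state = {
    "module.conv1.weight": "tensor-placeholder",
    "module.conv1.bias": "tensor-placeholder",
}

# len("module.") == 7, hence the k[7:] in the test script.
stripped_state = {k[7:]: v for k, v in saved_state.items()}

print(sorted(stripped_state))  # ['conv1.bias', 'conv1.weight']
```

The same slice appears in the repo's test script; the dict above only stands in for the real checkpoint.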
Thank you for your reply. Yes, I train with one GPU.
No, I don't think so. If you find any other errors, please let me know. Thank you.
Thank you!
@tugumi911 you are right! |
If we want to use only one GPU, there is another way to achieve it.
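One such way (a sketch, not taken from this repo): leave both scripts untouched and remap the checkpoint keys at load time instead, adding or stripping the "module." prefix depending on whether the loading model is wrapped in DataParallel. The helper names and key names below are hypothetical:

```python
def add_module_prefix(state_dict):
    # Make a single-GPU checkpoint loadable by a DataParallel-wrapped model.
    return {"module." + k: v for k, v in state_dict.items()}

def strip_module_prefix(state_dict):
    # The reverse: make a DataParallel checkpoint loadable by a plain model.
    return {k[len("module."):] if k.startswith("module.") else k: v
            for k, v in state_dict.items()}

single_gpu_ckpt = {"conv1.weight": "tensor-placeholder"}
wrapped = add_module_prefix(single_gpu_ckpt)
print(sorted(wrapped))  # ['module.conv1.weight']
print(sorted(strip_module_prefix(wrapped)))  # ['conv1.weight']
```

Another option, if I recall the API correctly, is to keep the DataParallel wrapper but pass a single device id (e.g. device_ids=[0]); the key names then never change between training and testing.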
Hi,
I ran train.sh, but I got some errors.
I think there are bugs.
Should I add "input = input.cuda()" around line 138 in train_pose.py, and change line 63, "params_dict = dict(model.module.named_parameters())", to "params_dict = dict(model.named_parameters())"?