Some questions about your netR_36.model #11
Comments
@StiphyJay, did you download the label_02 dataset? Does the dataset contain only the training set?
Yes, I downloaded the label_02 dataset, and my dataset directory contains only the testing set (0019, 0020). Is it possible that a problem with the model caused the error?
Hi @StiphyJay, yes, netR_36.model was trained with two GPUs to speed up the training phase. But I don't know how to run a model trained with two GPUs on a single GPU in PyTorch. In practice, a model trained with two GPUs is tested with two GPUs as well.
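For what it's worth, a checkpoint saved from `nn.DataParallel` prefixes every `state_dict` key with `module.`, which is exactly what makes a plain single-GPU model refuse to load it. A common workaround is to strip that prefix before calling `load_state_dict`. The sketch below uses a tiny stand-in network (the real `Pointnet_Tracking` class is assumed, not reproduced here):

```python
import torch.nn as nn

# Hypothetical stand-in for the real Pointnet_Tracking network.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# A checkpoint saved from nn.DataParallel has every key prefixed with "module.".
parallel_state = {"module." + k: v for k, v in model.state_dict().items()}

# Strip the "module." prefix so the keys match a plain (single-GPU) model.
stripped_state = {
    (k[len("module."):] if k.startswith("module.") else k): v
    for k, v in parallel_state.items()
}

# Loads without the key-mismatch RuntimeError.
model.load_state_dict(stripped_state)
```

In the real script the `parallel_state` dict would come from `torch.load(path, map_location="cuda:0")` (or `"cpu"`) so the tensors also land on an available device.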
Thanks for your reply. Here is my new question; could you help me solve it?
I think you might need to change the original code here.
Hello, thanks for your work.
I want to know whether the model (netR_36.model) was trained on two GTX 1080 Ti GPUs. If so, can the model work when I test it on a machine with a single GTX 1080 Ti? When I test it, I get an error in my terminal.
I modified the following code in test_tracking.py for testing:
parser.add_argument('--ngpu', type=int, default=1, help='# GPUs')
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
The error is:
Traceback (most recent call last):
File "test_tracking.py", line 177, in
netR.load_state_dict(torch.load(os.path.join(args.save_root_dir, args.model)))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 845, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Pointnet_Tracking:
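This `load_state_dict` mismatch can also be fixed from the other direction: instead of editing the checkpoint keys, wrap the single-GPU model in `nn.DataParallel` so its own keys gain the `module.` prefix and match the saved file. A minimal sketch, again with a hypothetical stand-in for `Pointnet_Tracking`:

```python
import torch.nn as nn

# Hypothetical stand-in for the real Pointnet_Tracking network.
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Simulate a checkpoint saved from a two-GPU nn.DataParallel run.
saved_state = {"module." + k: v for k, v in net.state_dict().items()}

# Wrapping adds the "module." prefix to the model's own keys,
# so the DataParallel checkpoint loads as-is.
wrapped = nn.DataParallel(net)
wrapped.load_state_dict(saved_state)

# Unwrap for plain single-GPU inference.
single = wrapped.module
```

Either approach (stripping the prefix or wrapping the model) avoids the `RuntimeError`; wrapping is the smaller code change, while stripping keeps the test script free of `DataParallel`.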
I would appreciate it if you could reply. Thank you!