I've run into this pitfall twice now: when you train on multiple GPUs and save the model directly, every key in the state dict gets an extra `module.` prefix. And wrapping the model in DataParallel again just to run inference is often impractical. A minimal sketch of how the prefix arises is shown below.
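For context, here is a minimal sketch of how the prefix appears (the toy model is made up purely for illustration):

```python
import torch.nn as nn

# Any nn.Module will do; this toy model just makes the key names visible.
model = nn.Sequential(nn.Linear(4, 2))
print(list(model.state_dict().keys()))     # ['0.weight', '0.bias']

# Wrapping in DataParallel prefixes every state-dict key with 'module.'
dp_model = nn.DataParallel(model)
print(list(dp_model.state_dict().keys()))  # ['module.0.weight', 'module.0.bias']
```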
Fix 1: branch on the GPU count when saving:

```python
# Save the model: unwrap .module when it was trained with DataParallel,
# so the checkpoint keys carry no 'module.' prefix.
if num_gpu > 1:
    torch.save(model.module.state_dict(), 'net.pth')
else:
    torch.save(model.state_dict(), 'net.pth')
```
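A small variant (my own suggestion, not from the original post): checking the wrapper type directly is a bit more robust than carrying a `num_gpu` flag around:

```python
import torch
import torch.nn as nn

# Unwrap only if the model is actually wrapped, regardless of GPU count.
to_save = model.module if isinstance(model, nn.DataParallel) else model
torch.save(to_save.state_dict(), 'net.pth')
```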
Fix 2: strip the `module.` prefix from the keys of the trained checkpoint (this is the one I mainly use now); it's not much trouble anyway.
```python
import torch
from collections import OrderedDict

pth = torch.load('./626.pth')
new_state_dict = OrderedDict()
for k, v in pth.items():
    name = k[7:]  # strip the leading 'module.' (7 characters)
    new_state_dict[name] = v
model.load_state_dict(new_state_dict)
model.eval()
```
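If your PyTorch is reasonably recent (1.9 or later, as far as I know), there is also a built-in helper that strips the prefix in place, so the manual loop can be skipped; `model` and the checkpoint path here are the same as in the snippet above:

```python
import torch
from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present

state_dict = torch.load('./626.pth')
# Removes the leading 'module.' from every key, in place; a no-op if absent.
consume_prefix_in_state_dict_if_present(state_dict, 'module.')
model.load_state_dict(state_dict)
model.eval()
```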
Source: https://blog.csdn.net/szn1316159505/article/details/129225188