
what is the use of adding a fc layer when reloading the pretrained model #12

Open
qy-feng opened this issue Dec 21, 2017 · 3 comments

@qy-feng

qy-feng commented Dec 21, 2017

Because of a GPU memory error, I had to restart the training part.
In the loading code, a new fc layer is added after the pretrained weights are loaded. What is it for?

```python
# Load a checkpoint saved from an nn.DataParallel model: each key carries
# a "module." prefix that a bare model does not expect, so strip it.
state_dict = torch.load(args.pretrained)['state_dict']
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in state_dict.items():
    name = k[7:]  # drop the leading "module." (7 characters)
    new_state_dict[name] = v
model.load_state_dict(new_state_dict)
model.fc = nn.Linear(2048, 80)
```
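For reference, a minimal self-contained sketch of just the key-renaming step, using a toy dict as a stand-in for a real checkpoint (the key names and values here are made up for illustration):

```python
from collections import OrderedDict

# Toy stand-in for a state_dict saved from nn.DataParallel: every key
# gains a "module." prefix that the bare (non-parallel) model lacks.
state_dict = OrderedDict([
    ("module.conv1.weight", "w1"),
    ("module.fc.weight", "w2"),
])

# Strip the 7-character "module." prefix so keys match the bare model.
new_state_dict = OrderedDict((k[7:], v) for k, v in state_dict.items())

print(list(new_state_dict))  # ['conv1.weight', 'fc.weight']
```

Note that reassigning `model.fc = nn.Linear(2048, 80)` after `load_state_dict` replaces the fc layer with freshly initialized weights, discarding the fc weights that were just loaded, which is presumably why the line looks suspicious.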
@last-one
Owner

last-one commented Dec 21, 2017

Sorry. I forgot to remove it. It's redundant.

@qy-feng
Author

qy-feng commented Dec 21, 2017

Thanks for your response.
By the way, do you have any script to evaluate the trained model on a test set to get mAP?

@pingjun18-li

Yeah, I have the same question, as we used the Caffe version.
