model.eval() seems not to work well #35
Comments
Are you training in evaluation mode?
Thanks for clarifying! What mIoU were you expecting? For example, what is the mIoU when using standard BatchNorm at the 1st epoch?
Am I missing something? You said, "when I switch to model.eval(), I get an explosion of the loss."
Please check out the PyTorch-compatible Synchronized Cross-GPU encoding.nn.BatchNorm2d and the example.
@mapleneverfade The syncBN works the same as standard BN in eval mode. Please try the new PyTorch DataParallel-compatible version.
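A minimal sketch of the suggested drop-in replacement, assuming a model whose normalization layers can simply be swapped; the ConvBlock module below is hypothetical, and only encoding.nn.BatchNorm2d comes from the comments above.

```python
# Sketch: using the synchronized encoding.nn.BatchNorm2d in place of
# torch.nn.BatchNorm2d. The ConvBlock module itself is hypothetical.
import torch.nn as nn
import encoding

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(ConvBlock, self).__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn = encoding.nn.BatchNorm2d(out_ch)   # cross-GPU synchronized BN
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))
```

In eval mode the synchronized layer is expected to fall back to its running mean and variance, exactly like the standard BatchNorm2d.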
Hi, have you checked out the example at https://github.com/zhanghang1989/PyTorch-SyncBatchNorm?
Hi @mapleneverfade, do you still have the problem? I still don't understand why you calculate the loss in eval mode.
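For reference, a common way to monitor loss on a validation set under eval mode is a no-grad loop. This is only a sketch of standard PyTorch practice, assuming a plain (non-parallel) model, criterion, and a val_loader, none of which come from the original report.

```python
# Sketch of a standard validation pass: model.eval() switches BatchNorm to
# running statistics, torch.no_grad() disables gradient bookkeeping.
import torch

def validate(model, criterion, val_loader):
    """Return the average loss over a validation loader."""
    model.eval()                    # BN layers use running mean / running var
    total_loss, n_batches = 0.0, 0
    with torch.no_grad():
        for images, targets in val_loader:
            images, targets = images.cuda(), targets.cuda()
            total_loss += criterion(model(images), targets).item()
            n_batches += 1
    return total_loss / max(n_batches, 1)
```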
Thank you very much for your work! There is still something I need your help with.
My code for parallelizing the model and the criterion looks like this:
model = encoding.parallel.ModelDataParallel(model, device_ids=[0, 1, 2])
criterion = encoding.parallel.CriterionDataParallel(criterion, device_ids=[0, 1, 2])
Training goes well, but when I switch to model.eval(), I get an explosion of the loss.
Is there something wrong with model.eval()?
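One possible source of such a jump, sketched below with plain torch.nn.BatchNorm2d rather than the encoding version: in eval mode BatchNorm normalizes with its running statistics, so if those estimates are still poor (few updates, or input statistics far from them), the outputs, and therefore the loss, can diverge sharply from what is seen in train mode.

```python
# Sketch: how train-mode and eval-mode BatchNorm outputs can diverge.
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm2d(8)
x = torch.randn(4, 8, 16, 16) * 5 + 3   # data far from the default running stats

bn.train()
y_train = bn(x)    # normalized with this batch's statistics (also updates running stats)

bn.eval()
y_eval = bn(x)     # normalized with running_mean / running_var

print(y_train.std().item())  # close to 1, as expected
print(y_eval.std().item())   # noticeably larger: the running estimates are still poor
```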