
The loss becomes NaN #21

Open
niyunsheng opened this issue May 15, 2020 · 1 comment

@niyunsheng

When I was trying to reproduce Face X-ray, I modified HRNet-Image-Classification, but I ran into a bug where the loss becomes NaN.
This is what I added after stage4 in cls_hrnet.py:

        # Upsample all branch outputs to the resolution of the highest-resolution branch
        x0_h, x0_w = y_list[0].size(2), y_list[0].size(3)
        x1 = F.interpolate(y_list[1], size=(x0_h, x0_w), mode='bilinear', align_corners=True)
        x2 = F.interpolate(y_list[2], size=(x0_h, x0_w), mode='bilinear', align_corners=True)
        x3 = F.interpolate(y_list[3], size=(x0_h, x0_w), mode='bilinear', align_corners=True)

        x = torch.cat([y_list[0], x1, x2, x3], 1)
        x = self.one_conv2d(x)  # a single conv2d that reduces the channels to 1

        x = F.interpolate(x, size=(224, 224), mode='bilinear', align_corners=True)
        xray = torch.sigmoid(x)
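(The definition of `one_conv2d` is not shown in the snippet above. A minimal standalone sketch of what such a layer could look like, assuming HRNet-W18 branch widths of [18, 36, 72, 144], so the concatenated feature map has 270 channels:)

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for self.one_conv2d (its definition is not in the issue).
# Assumes HRNet-W18 branch widths [18, 36, 72, 144] -> 270 channels after concatenation.
one_conv2d = nn.Conv2d(in_channels=18 + 36 + 72 + 144, out_channels=1, kernel_size=1)

x = torch.randn(2, 270, 56, 56)   # e.g. a batch of concatenated branch features
out = one_conv2d(x)               # -> shape (2, 1, 56, 56), one value per pixel
```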

Then I found that the predicted xray is almost all zero and the loss is NaN. What's wrong?

I wrote the loss function as below:

    def criterion(pred, target):
        # element-wise binary cross-entropy: target*log(pred) + (1-target)*log(1-pred)
        x = torch.add(torch.mul(target, torch.log(pred)),
                      torch.mul(torch.sub(1, target), torch.log(torch.sub(1, pred))))
        loss = -torch.mean(x)
        return loss
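(Not part of the original issue, but for context: a common cause of NaN in a hand-written BCE like this is `log(0)` when the sigmoid output saturates at exactly 0 or 1. A minimal sketch of a numerically stable variant, using a clamp with an assumed epsilon, or PyTorch's built-in BCE:)

```python
import torch
import torch.nn.functional as F

def criterion_stable(pred, target, eps=1e-7):
    # Clamp predictions away from 0 and 1 so log() never sees exactly 0.
    pred = pred.clamp(min=eps, max=1.0 - eps)
    x = target * torch.log(pred) + (1 - target) * torch.log(1 - pred)
    return -torch.mean(x)

# Equivalent built-ins that handle the numerics internally:
# loss = F.binary_cross_entropy(pred, target)
# or, applied to the pre-sigmoid logits (more stable):
# loss = F.binary_cross_entropy_with_logits(logits, target)
```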
@xiaoshuyun

Hello, do you know how to train on my own dataset?
