
bce_dice_loss negative loss #49

Open

jcarta opened this issue Jul 21, 2020 · 3 comments

jcarta commented Jul 21, 2020

Anyone else getting a negative loss value when using bce_dice_loss?

# assuming tf.keras; use "from keras import backend as K" etc. for standalone Keras
from tensorflow.keras import backend as K
from tensorflow.keras.losses import binary_crossentropy

def bce_dice_loss(y_true, y_pred):
    # binary_crossentropy is >= 0, but dice_coef is in (0, 1], so this can go negative
    return 0.5*binary_crossentropy(y_true, y_pred) - dice_coef(y_true, y_pred)

def dice_coef_loss(y_true, y_pred):
    return 1. - dice_coef(y_true, y_pred)

def dice_coef(y_true, y_pred):
    # soft Dice coefficient; the smooth term avoids division by zero
    smooth = 1.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
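
As a quick sanity check (a minimal sketch, assuming TensorFlow 2.x eager execution and hypothetical pixel values), the loss can indeed go below zero because binary_crossentropy is non-negative while dice_coef lies in (0, 1]:

import tensorflow as tf

y_true = tf.constant([[1., 0., 1., 1.]])
y_pred = tf.constant([[0.95, 0.05, 0.9, 0.9]])

print(dice_coef(y_true, y_pred).numpy())                    # ~0.96, close to 1
print((0.5 * binary_crossentropy(y_true, y_pred)).numpy())  # ~0.04, small and positive
print(bce_dice_loss(y_true, y_pred).numpy())                # ~-0.92, i.e. negative
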
@Swathi-Guptha

@jcarta did you find a solution for this?

@MrGiovanni (Owner)

Hi @jcarta and @Swathi-Guptha

If you want the loss value to be positive, you can simply add the constant 1.0 to the loss. That is:

def bce_dice_loss(y_true, y_pred):
    return 1.0 + 0.5*binary_crossentropy(y_true, y_pred) - dice_coef(y_true, y_pred)

Please note that this constant does not affect gradient descent, since the gradient of a constant is zero.
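
For concreteness, a minimal sketch (assuming TensorFlow 2.x eager execution and the definitions above; the tensor values are hypothetical) showing that the shifted and unshifted losses have identical gradients:

import tensorflow as tf

y_true = tf.constant([[1., 0., 1., 1.]])
y_pred = tf.Variable([[0.8, 0.2, 0.6, 0.9]])

with tf.GradientTape(persistent=True) as tape:
    shifted = bce_dice_loss(y_true, y_pred)   # with the +1.0 offset
    original = shifted - 1.0                  # the original (possibly negative) loss

# The two gradients are identical: the constant contributes nothing.
print(tf.reduce_max(tf.abs(tape.gradient(shifted, y_pred)
                           - tape.gradient(original, y_pred))).numpy())   # 0.0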

Hope this helps you.

Zongwei

@Swathi-Guptha

So, is the formula used here suitable for accurate model training?

I was reading about BCE-Dice loss and came across a different formula:

def DiceBCELoss(targets, inputs, smooth=1e-6):
    # flatten label and prediction tensors
    inputs = K.flatten(inputs)
    targets = K.flatten(targets)

    BCE = binary_crossentropy(targets, inputs)
    # element-wise product (the snippet I found used K.dot, which fails on 1-D tensors)
    intersection = K.sum(targets * inputs)
    dice_loss = 1 - (2*intersection + smooth) / (K.sum(targets) + K.sum(inputs) + smooth)
    Dice_BCE = BCE + dice_loss

    return Dice_BCE
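
As a rough comparison (a minimal sketch, assuming TensorFlow 2.x and the definitions above, with hypothetical 4-pixel values), the two formulas give quite different numbers on the same input:

import tensorflow as tf

targets = tf.constant([1., 0., 1., 1.])
inputs = tf.constant([0.95, 0.05, 0.9, 0.9])

print(DiceBCELoss(targets, inputs).numpy())    # ~0.13: BCE + (1 - dice), always >= 0
print(bce_dice_loss(targets, inputs).numpy())  # ~-0.92 with the original 0.5*BCE - dice_coef
                                               # (~+0.08 with the +1.0 version above)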

May I know what the difference between the two is?
