Anyone else getting a negative loss value when using bce_dice_loss?
```python
def bce_dice_loss(y_true, y_pred):
    return 0.5 * binary_crossentropy(y_true, y_pred) - dice_coef(y_true, y_pred)

def dice_coef_loss(y_true, y_pred):
    return 1. - dice_coef(y_true, y_pred)

def dice_coef(y_true, y_pred):
    smooth = 1.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
```
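For what it's worth, a negative value is expected with this formulation: as predictions improve, the BCE term approaches 0 while the dice coefficient approaches 1, so `0.5*BCE - dice` goes below zero. Here is a minimal NumPy sketch of the same formulas (hypothetical example values, not the original Keras tensors) that reproduces the effect:

```python
import numpy as np

def dice_coef(y_true, y_pred, smooth=1.0):
    # same dice formula as above, on flat numpy arrays
    y_true_f = y_true.ravel()
    y_pred_f = y_pred.ravel()
    intersection = np.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # mean binary cross-entropy over all elements
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def bce_dice_loss(y_true, y_pred):
    return 0.5 * binary_crossentropy(y_true, y_pred) - dice_coef(y_true, y_pred)

# a near-perfect prediction drives the loss below zero:
# 0.5*BCE is ~0.005 while the dice coefficient is ~0.99
y_true = np.array([1.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.99, 0.99, 0.01, 0.01])
print(bce_dice_loss(y_true, y_pred))  # negative, about -0.99
```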
@jcarta did you get any solution for it?
Hi @jcarta and @Swathi-Guptha
If you want a non-negative loss value, you can simply add the constant 1.0 to the loss. That is:
```python
def bce_dice_loss(y_true, y_pred):
    return 1.0 + 0.5 * binary_crossentropy(y_true, y_pred) - dice_coef(y_true, y_pred)
```
Please note that this constant does not affect gradient descent, since the gradient of a constant is zero.
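A tiny sketch (with a made-up scalar loss, just for illustration) confirms this: shifting a loss by a constant cancels out in any finite difference, so the gradient is identical:

```python
# hypothetical scalar loss; the 1.0 shift cancels in the finite difference,
# so every gradient-descent step is unchanged
def loss(w):
    return (w - 3.0) ** 2

def shifted_loss(w):
    return 1.0 + loss(w)

def num_grad(f, w, h=1e-6):
    # central finite-difference approximation of df/dw
    return (f(w + h) - f(w - h)) / (2 * h)

print(num_grad(loss, 0.5), num_grad(shifted_loss, 0.5))  # both about -5.0
```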
Hope this helps you.
Zongwei
So the formula as written is still suitable for training an accurate model?
I was reading about BCE dice loss and came across a different formula:

```python
def DiceBCELoss(targets, inputs, smooth=1e-6):
    # flatten label and prediction tensors
    inputs = K.flatten(inputs)
    targets = K.flatten(targets)
    BCE = binary_crossentropy(targets, inputs)
    intersection = K.sum(K.dot(targets, inputs))
    dice_loss = 1 - (2 * intersection + smooth) / (K.sum(targets) + K.sum(inputs) + smooth)
    Dice_BCE = BCE + dice_loss
    return Dice_BCE
```
May I know what the difference between the two is?