ceDiceloss #83
The error is most likely caused by the class labels in your dataset: the assertion fires when a target label falls outside the range [0, num_classes) that your model's predictions support, i.e. some mask pixels are negative or greater than or equal to num_classes.
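As a quick sanity check, you can verify that every ground-truth label lies in the valid range before training. A minimal sketch (the helper name `check_targets` and the flat-list input are illustrative, not part of the repo):

```python
def check_targets(labels, num_classes):
    """Raise if any integer label falls outside [0, num_classes).

    `labels` is any flat iterable of integer class labels, e.g.
    mask_tensor.flatten().tolist() for a PyTorch mask tensor.
    """
    lo, hi = min(labels), max(labels)
    if lo < 0 or hi >= num_classes:
        raise ValueError(
            f"labels span [{lo}, {hi}] but cross-entropy requires [0, {num_classes})"
        )
```

If this raises with num_classes=9, some mask pixels carry values outside 0..8, which is exactly the condition the CUDA kernel asserts on.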
---- Replied Message ----
Date: 10/22/2024 17:17 | Subject: [JCruan519/VM-UNet] ceDiceloss (Issue #83)
I am using the Synapse dataset you provided, with num_classes=9, and the model outputs logits with shape (B, 9, 224, 224). Do you know what might be causing the error?
You can first check whether your conda environment matches the packages listed in the requirements file.
Sure, thank you very much for your help.
Why do I get an error when using ceDice as the loss function during training:
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [278,0,0] Assertion `t >= 0 && t < n_classes` failed.
But when I switch to nDiceLoss, it runs normally? Why is that?
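The difference between the two losses likely comes down to indexing: the cross-entropy half of ceDice picks out the log-probability at the integer target index, so any label outside [0, n_classes) is an illegal index, which is exactly the condition `t >= 0 && t < n_classes` the CUDA kernel asserts. A Dice loss that builds one-hot masks by comparison never indexes by label, so an out-of-range label simply matches no class and no error is raised. A pure-Python sketch of both per-pixel computations (no PyTorch; the function names are illustrative):

```python
def nll_pixel(log_probs, target):
    # log_probs: per-class log-probabilities for one pixel (list of floats).
    # Cross-entropy reduces to -log_probs[target], so the target must be
    # a valid index -- the same check the CUDA kernel performs.
    if not (0 <= target < len(log_probs)):
        raise IndexError(f"target {target} violates 0 <= t < {len(log_probs)}")
    return -log_probs[target]

def dice_pixel_overlap(probs, target, num_classes):
    # A comparison-based one-hot: an out-of-range target matches no class,
    # so the overlap is just 0.0 and no illegal indexing ever happens.
    one_hot = [1.0 if target == c else 0.0 for c in range(num_classes)]
    return sum(p * t for p, t in zip(probs, one_hot))
```

This is a simplified model of the two loss families, not the repo's actual ceDice/nDiceLoss code, but it shows why only the cross-entropy path can crash on bad labels.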