Hey, this is an issue that you should post on the repository you took the loss from. You may also want to take a look at this repo: https://github.com/JunMa11/SegLoss
-
The AveragedHausdorffLoss is copied from https://github.com/HaipengXiong/weighted-hausdorff-loss/blob/master/object-locator/losses.py and used in a new trainer.

Error message:
epoch: 0
Traceback (most recent call last):
File "/root/miniconda3/envs/myconda/bin/nnUNet_train", line 33, in <module>
sys.exit(load_entry_point('nnunet', 'console_scripts', 'nnUNet_train')())
File "/mnt/nnUNet/nnunet/run/run_training.py", line 179, in main
trainer.run_training()
File "/mnt/nnUNet/nnunet/training/network_training/nnUNetTrainerV2.py", line 440, in run_training
ret = super().run_training()
File "/mnt/nnUNet/nnunet/training/network_training/nnUNetTrainer.py", line 317, in run_training
super(nnUNetTrainer, self).run_training()
File "/mnt/nnUNet/nnunet/training/network_training/network_trainer.py", line 456, in run_training
l = self.run_iteration(self.tr_gen, True)
File "/mnt/nnUNet/nnunet/training/network_training/nnUNetTrainerV2.py", line 249, in run_iteration
l = self.loss(output, target)
File "/root/miniconda3/envs/myconda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/nnUNet/nnunet/training/loss_functions/deep_supervision.py", line 39, in forward
l = weights[0] * self.loss(x[0], y[0])
File "/root/miniconda3/envs/myconda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1108, in _call_impl
if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
File "/root/miniconda3/envs/myconda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1185, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'AveragedHausdorffLoss' object has no attribute '_backward_hooks'
Exception in thread Thread-4:
Traceback (most recent call last):
File "/root/miniconda3/envs/myconda/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/root/miniconda3/envs/myconda/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/root/miniconda3/envs/myconda/lib/python3.8/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 92, in results_loop
raise RuntimeError("Abort event was set. So someone died and we should end this madness. \nIMPORTANT: "
RuntimeError: Abort event was set. So someone died and we should end this madness.
IMPORTANT: This is not the actual error message! Look further up to see what caused the error. Please also check whether your RAM was full
Exception in thread Thread-5:
Traceback (most recent call last):
File "/root/miniconda3/envs/myconda/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/root/miniconda3/envs/myconda/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/root/miniconda3/envs/myconda/lib/python3.8/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 92, in results_loop
raise RuntimeError("Abort event was set. So someone died and we should end this madness. \nIMPORTANT: "
RuntimeError: Abort event was set. So someone died and we should end this madness.
IMPORTANT: This is not the actual error message! Look further up to see what caused the error. Please also check whether your RAM was full
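The AttributeError at the top of the traceback has a common, specific cause: an `nn.Module` subclass whose `__init__` never calls `super().__init__()`. PyTorch registers internal attributes such as `_backward_hooks` in `nn.Module.__init__`, so if that call is skipped, `__getattr__` raises exactly this error the first time the module is invoked. A minimal sketch of the fix (this is an illustrative reimplementation of the averaged Hausdorff distance, not the exact code from the linked repo):

```python
import torch
import torch.nn as nn


class AveragedHausdorffLoss(nn.Module):
    def __init__(self):
        # Calling super().__init__() is what creates _backward_hooks,
        # _parameters, etc. Omitting it produces the AttributeError above.
        super().__init__()

    def forward(self, set1: torch.Tensor, set2: torch.Tensor) -> torch.Tensor:
        # set1: (N, D) and set2: (M, D) point sets.
        d = torch.cdist(set1, set2)       # (N, M) pairwise Euclidean distances
        term_1 = d.min(dim=1)[0].mean()   # each point in set1 -> nearest in set2
        term_2 = d.min(dim=0)[0].mean()   # each point in set2 -> nearest in set1
        return term_1 + term_2
```

Note that even with `super().__init__()` in place, this loss expects two point sets of shape `(N, D)` and `(M, D)`, whereas nnU-Net's deep-supervision wrapper passes segmentation logits and targets, so the forward signature would likely still need to be adapted.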