Hello, I have been using this library recently, and I found a bug in ProxyAnchorLoss.

pytorch-metric-learning/src/pytorch_metric_learning/losses/proxy_anchor_loss.py, Lines 11 to 23 in a637d48

self.proxies is defined on Line 14, but it is never explicitly moved to the same device as embeddings. An error is raised when I use CUDA:
Traceback (most recent call last):
File "base.py", line 130, in <module>
main()
File "base.py", line 126, in main
trainer.train(num_epochs=num_epochs)
File "/usr/local/lib/python3.7/site-packages/pytorch_metric_learning/trainers/base_trainer.py", line 85, in train
self.forward_and_backward()
File "/usr/local/lib/python3.7/site-packages/pytorch_metric_learning/trainers/base_trainer.py", line 112, in forward_and_backward
self.calculate_loss(self.get_batch())
File "/usr/local/lib/python3.7/site-packages/pytorch_metric_learning/trainers/metric_loss_only.py", line 12, in calculate_loss
self.losses["metric_loss"] = self.maybe_get_metric_loss(embeddings, labels, indices_tuple)
File "/usr/local/lib/python3.7/site-packages/pytorch_metric_learning/trainers/metric_loss_only.py", line 16, in maybe_get_metric_loss
return self.loss_funcs["metric_loss"](embeddings, labels, indices_tuple)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/site-packages/pytorch_metric_learning/losses/base_metric_loss_function.py", line 37, in forward
loss_dict = self.compute_loss(embeddings, labels, indices_tuple)
File "/usr/local/lib/python3.7/site-packages/pytorch_metric_learning/losses/proxy_anchor_loss.py", line 23, in compute_loss
cos = lmu.sim_mat(embeddings, prox)
File "/usr/local/lib/python3.7/site-packages/pytorch_metric_learning/utils/loss_and_miner_utils.py", line 27, in sim_mat
return torch.matmul(x, y.t())
RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'mat2' in call to _th_mm
My solution is to add this one line of code before sim_mat is called:

prox = prox.to(embeddings.device)
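For context, here is a rough sketch of where that one-line fix would sit inside compute_loss. Apart from the prox.to(embeddings.device) line and the lmu.sim_mat call shown in the traceback, the body is an assumption for illustration rather than the library's exact source (see the linked lines 11 to 23 of proxy_anchor_loss.py):

    from pytorch_metric_learning.utils import loss_and_miner_utils as lmu

    # Sketch of the patched method; only the prox.to(embeddings.device) line is
    # the proposed change, the rest is illustrative.
    def compute_loss(self, embeddings, labels, indices_tuple):
        prox = self.proxies                    # nn.Parameter created in __init__, on CPU unless moved
        prox = prox.to(embeddings.device)      # proposed fix: match the embeddings' device
        cos = lmu.sim_mat(embeddings, prox)    # the matmul that raised the RuntimeError above
        # ... remainder of the Proxy Anchor loss computation ...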
KevinMusgrave changed the title from "A bug in ProxyAnchorLoss" to "Calling .to(device) on classification loss functions, or calling .to(device) on the parameters inside forward()" on Jul 14, 2020.
However, I can also do what you've suggested to make it more convenient. If I make this change, it will apply to all loss functions with a weight matrix (ArcFace, NormalizedSoftmaxLoss, etc.).
In my opinion, it's still better to move the loss function to the device like in my previous comment. This is because it should be on the device before you create the loss function's optimizer.
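For reference, a minimal sketch of the recommended ordering, assuming the ProxyAnchorLoss constructor arguments and hyperparameter values shown here (they are illustrative, not prescribed by this thread):

    import torch
    from pytorch_metric_learning import losses

    device = torch.device("cuda")

    # Move the loss function (and with it self.proxies) to the GPU first...
    loss_func = losses.ProxyAnchorLoss(num_classes=100, embedding_size=128).to(device)

    # ...then create the optimizer, so it is built over the CUDA parameters.
    loss_optimizer = torch.optim.SGD(loss_func.parameters(), lr=0.01)

This matches the general PyTorch guidance to move a module to its target device before constructing an optimizer for its parameters.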