Traceback (most recent call last):
File "demo.py", line 48, in <module>
pl.Trainer(max_epochs=20, gpus=1).fit(module)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 749, in fit
self.single_gpu_train(model)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/distrib_parts.py", line 491, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 910, in run_pretrain_routine
self.train()
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 384, in train
self.run_training_epoch()
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 456, in run_training_epoch
_outputs = self.run_training_batch(batch, batch_idx)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 633, in run_training_batch
loss, batch_output = optimizer_closure()
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 597, in optimizer_closure
output_dict = self.training_forward(split_batch, batch_idx, opt_idx, self.hiddens)
File "/home/n/repos/pytorch-lightning/pytorch_lightning/trainer/training_loop.py", line 770, in training_forward
output = self.model.training_step(*args)
File "demo.py", line 40, in training_step
yhat = self.forward(batch.x1, batch.x2)
AttributeError: 'tuple' object has no attribute 'x1'
Expected behavior
Namedtuples returned from the dataset should keep their original fields.
I am having similar trouble with a multi-GPU setup. Was that fixed for multiple GPUs in the PR? If not, I believe this issue should be reopened.
In my case everything works fine on a single GPU, but with 2 GPUs I get the error AttributeError: 'tuple' object has no attribute 'image'.
At that line the batch should still be a namedtuple, not a plain tuple.
🐛 Bug
Named tuples returned from a `Dataset` get converted to regular tuples when sent to the GPU. This happens because `isinstance(instance_of_a_named_tuple, tuple)` evaluates to True in distrib_parts.py:
https://github.com/PyTorchLightning/pytorch-lightning/blob/67d5f4dc392250d23bfeb11aba45e919a99ff1c0/pytorch_lightning/trainer/distrib_parts.py#L463
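The pitfall can be demonstrated without a GPU: a namedtuple passes `isinstance(x, tuple)`, so rebuilding the batch with a plain `tuple(...)` silently drops the field names. Below is a minimal sketch of the failure and of a transfer helper that preserves the namedtuple class; `move_to_device` is a hypothetical illustration, not Lightning's actual transfer code:

```python
from collections import namedtuple

Batch = namedtuple("Batch", ["x1", "x2"])

def move_to_device(batch, move):
    """Recursively apply `move` (e.g. tensor.cuda) to every element,
    preserving container types. Hypothetical helper for illustration."""
    # A namedtuple is a tuple subclass, so check for `_fields` first
    # and rebuild with the same class to keep the field names.
    if isinstance(batch, tuple) and hasattr(batch, "_fields"):
        return type(batch)(*(move_to_device(item, move) for item in batch))
    if isinstance(batch, (list, tuple)):
        return type(batch)(move_to_device(item, move) for item in batch)
    return move(batch)

b = Batch(x1=1, x2=2)
print(isinstance(b, tuple))  # True — this is why the naive check misfires

moved = move_to_device(b, lambda t: t * 10)
print(moved.x1, moved.x2)    # fields survive the transfer
```

The key point is the `hasattr(batch, "_fields")` test: it distinguishes namedtuples from plain tuples before the generic `isinstance(batch, tuple)` branch can flatten them.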
To Reproduce
Environment
- CUDA:
  - GPU: GeForce RTX 2080 Ti
  - available: True
  - version: 10.2
- numpy: 1.18.3
- pyTorch_debug: False
- pyTorch_version: 1.5.0
- pytorch-lightning: 0.7.4rc5
- tensorboard: 2.2.1
- tqdm: 4.45.0
- OS: Linux
- architecture:
- 64bit
- ELF
- processor:
- python: 3.8.2
- version: #1 SMP PREEMPT Sun, 05 Apr 2020 05:13:14 +0000