
Questions about AttributeError: Missing attribute "user_labels_for_eval" #47

Open
ShenZC25 opened this issue Mar 1, 2023 · 6 comments

Comments

@ShenZC25

ShenZC25 commented Mar 1, 2023

Hello, thank you for your excellent work! I have just started running your program following the README, but I have run into some problems.

First of all, I ran 'python DeepDPM.py --dataset synthetic --log_emb every_n_epochs --log_emb_every 1' as instructed, but it raised AttributeError: Missing attribute "user_labels_for_eval". I don't know how to set user_labels_for_eval. (I remember the documentation says this is not needed for training, so I don't know why the error is raised.)

Secondly, you mentioned that the dataset does not need to be downloaded manually when training on raw data, and that it will be downloaded automatically to the "data" directory ('When training on raw data (e.g., on MNIST, Reuters10k) the data for MNIST will be automatically downloaded to the "data" directory. For reuters10k, the user needs to download the dataset independently (available online) into the "data" directory.'). Does this mean training downloads the original dataset directly from the internet? I ask because I did not see a "data" folder (my understanding is that the program automatically downloads the dataset and creates the corresponding folder).

Thank you for your answer.

@arthur-ver

arthur-ver commented Mar 1, 2023

Hi! In regards to your first question, I've had a similar issue and it's a typo in the code. Check out my pull request with the fix: #45

@ShenZC25
Author

ShenZC25 commented Mar 1, 2023

> Hi! In regards to your first question, I've had a similar issue and it's a typo in the code. Check out my pull request with the fix: #45

Thank you for your answer. It was very useful and helped me solve problem 1. But after running, a new problem appeared immediately: 'TensorDataset' object has no attribute 'data' (I think it may be related to question 2?). Have you ever encountered it?

Finished training!
Traceback (most recent call last):
  File "DeepDPM.py", line 449, in <module>
    train_cluster_net()
  File "DeepDPM.py", line 433, in train_cluster_net
    data = dataset.data
AttributeError: 'TensorDataset' object has no attribute 'data'
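(In case it helps anyone hitting the same line: a minimal, hypothetical workaround sketch. A torch.utils.data.TensorDataset keeps its tensors in the .tensors tuple rather than in a .data attribute, so the access at line 433 could fall back to that. The helper name get_raw_data and the assumption that the first tensor holds the features are mine, not from the repository.)

```python
from torch.utils.data import TensorDataset

def get_raw_data(dataset):
    """Hypothetical helper: return the raw feature tensor from either a
    torchvision-style dataset (.data) or a TensorDataset (.tensors)."""
    if hasattr(dataset, "data"):          # e.g. torchvision MNIST
        return dataset.data
    if isinstance(dataset, TensorDataset):
        return dataset.tensors[0]         # assumes the first tensor holds the features
    raise AttributeError(f"Cannot extract raw data from {type(dataset).__name__}")

# Usage sketch: replace `data = dataset.data` with `data = get_raw_data(dataset)`.
```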

@ShenZC25
Author

ShenZC25 commented Mar 2, 2023

In addition to the error above, there is also a new one:
DeepDPM.py", line 449, in
train_cluster_net()
File "DeepDPM.py", line 428, in train_cluster_net
trainer.fit(model, train_loader, val_loader)
File "D:\Programs\Python\Python38\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 499, in fit
self.dispatch()
File "D:\Programs\Python\Python38\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 546, in dispatch
self.accelerator.start_training(self)
File "D:\Programs\Python\Python38\lib\site-packages\pytorch_lightning\accelerators\accelerator.py", line 73, in start_training
self.training_type_plugin.start_training(trainer)
File "D:\Programs\Python\Python38\lib\site-packages\pytorch_lightning\plugins\training_type\training_type_plugin.py", line 114, in start_training
self._results = trainer.run_train()
File "D:\Programs\Python\Python38\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 637, in run_train
self.train_loop.run_training_epoch()
File "D:\Programs\Python\Python38\lib\site-packages\pytorch_lightning\trainer\training_loop.py", line 560, in run_training_epoch
self.trainer.logger_connector.log_train_epoch_end_metrics(
File "D:\Programs\Python\Python38\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\logger_connector.py", line 415, in log_train_epoch_end_metrics
self.training_epoch_end(model, epoch_output, num_optimizers)
File "D:\Programs\Python\Python38\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\logger_connector.py", line 466, in training_epoch_end
epoch_output = model.training_epoch_end(epoch_output)
File "D:\PyCharm_Projects\DeepDPM\src\clustering_models\clusternet_modules\clusternetasmodel.py", line 549, in training_epoch_end
self.plot_utils.update_colors(self.split_performed, self.mus_ind_to_split, self.mus_inds_to_merge)
File "D:\PyCharm_Projects\DeepDPM\src\clustering_models\clusternet_modules\utils\plotting_utils.py", line 425, in update_colors
self.update_colors_split(split_inds)
File "D:\PyCharm_Projects\DeepDPM\src\clustering_models\clusternet_modules\utils\plotting_utils.py", line 431, in update_colors_split
mask = torch.zeros(len(self.colors), dtype=bool)
TypeError: object of type 'NoneType' has no len()
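(For what it's worth, this second traceback fails because plot_utils.colors is still None when update_colors_split runs, so len(self.colors) blows up. Below is a hedged sketch of a defensive guard based only on the traceback; it is not the repository's actual fix, and the rest of the method body is assumed unchanged.)

```python
import torch

def update_colors_split(self, split_inds):
    # Guard: self.colors may still be None if no embedding plot has been
    # drawn yet, so skip the color update instead of calling len(None).
    if self.colors is None:
        return
    mask = torch.zeros(len(self.colors), dtype=torch.bool)  # line from the traceback
    # ... rest of the original method unchanged ...
```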

@arthur-ver

Hi! I've only tried working with custom datasets and haven't encountered this issue yet

@ShenZC25
Author

ShenZC25 commented Mar 2, 2023

> Hi! I've only tried working with custom datasets and haven't encountered this issue yet

Thank you for your patient answer. I will continue to study it.

@meitarronen
Contributor

Hey @ShenZC25!
Thank you @arthur-ver for your assistance with this issue as well; I've updated the code too.

Regarding the second problem - we use the PyTorch interface to auto-download available datasets to your directory of choice (e.g., data). For example, if you run DeepDPM on the MNIST dataset, it would be automatically downloaded.

If you choose to use other datasets, you would need to download them yourself, and just point DeepDPM to the embeddings location using the --dir flag.
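For reference, the auto-download goes through the standard torchvision datasets API, which also creates the target folder on the first run. Roughly like the sketch below (not DeepDPM's exact code):

```python
from torchvision import datasets, transforms

# First run: downloads MNIST and creates ./data if it does not exist yet.
train_set = datasets.MNIST(root="data", train=True, download=True,
                           transform=transforms.ToTensor())
```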
