
Would you please tell the torch seed of 2 pretrained models. #17

Open
sjy234sjy234 opened this issue Feb 24, 2020 · 7 comments

Comments

@sjy234sjy234

Would you please share the torch seeds of the two pretrained models?
I found that the result varies with the torch seed; I ran the evaluation many times but could not reach 76.2 for caption retrieval, only 75.6.
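For anyone trying to compare runs: results only become repeatable if every random source is seeded, not just torch's. A minimal recipe (this is a common PyTorch pattern, not necessarily what the repo's training script does):

```python
import random

import numpy as np
import torch


def set_seed(seed: int) -> None:
    """Seed every RNG that typically affects training."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)               # CPU generator
    torch.cuda.manual_seed_all(seed)      # all GPU generators (no-op without CUDA)
    # Trade some speed for deterministic cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(42)
a = torch.rand(3)
set_seed(42)
b = torch.rand(3)
print(torch.equal(a, b))  # True: same seed, same draws
```

Even with all of this, results can still differ across torch versions because the underlying kernels change, which matches what this thread observes.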

@ahagary

ahagary commented May 27, 2020

Besides adapting to the Python or torch version, did you modify the code?
I used the provided code, downloaded the pretrained model, and ran:
python evaluation_models.py
but I only got (i2t: R@1 66.9, t2i: R@1 49.4) on flickr30k_precomp.
Could you please offer some advice? Thank you very much! @sjy234sjy234 @KunpengLi1994

@sjy234sjy234
Author


With torch 1.x, the results come out lower than with torch 0.4.

@ahagary

ahagary commented Jun 4, 2020

I tried your advice last week, and I still want to reply to express my gratitude: your suggestion was very effective. With Python 2.7 and torch 0.4.1 I got R@1 71.5 (i2t) and R@1 54.8 (t2i) on f30k_precomp. It's incredible that the version has so much impact on the results. @sjy234sjy234

@By-he

By-he commented Sep 17, 2020

Hello, I see you have replicated the author's experiment. I'd like to ask about the error "MemoryError: Unable to allocate 31.1 GiB for an array with shape (8352423936,) and data type float32" when running the code. Which parameters can I modify to make it run normally? My machine is an RTX 2060 with 6 GB of video memory and 16 GB of RAM.

@ahagary

ahagary commented Sep 21, 2020


This is caused by insufficient CPU memory. You can try splitting train_ims.npy in coco_precomp into multiple files and then changing __getitem__ of PrecompDataset to load from the right file. Data reading will take longer, but the code will run normally.
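A rough sketch of that idea, with hypothetical names (`split_npy`, `ChunkedFeatures`) standing in for the actual PrecompDataset changes, demonstrated on tiny synthetic arrays instead of the real train_ims.npy:

```python
import os
import tempfile

import numpy as np


def split_npy(path, n_chunks, out_dir):
    """Split one big feature file into n_chunks smaller .npy files."""
    ims = np.load(path)
    for i, chunk in enumerate(np.array_split(ims, n_chunks)):
        np.save(os.path.join(out_dir, "train_ims_%d.npy" % i), chunk)


class ChunkedFeatures:
    """Stand-in for the image side of PrecompDataset.__getitem__:
    only the chunk holding the requested index is touched."""

    def __init__(self, out_dir, n_chunks):
        self.paths = [os.path.join(out_dir, "train_ims_%d.npy" % i)
                      for i in range(n_chunks)]
        # mmap_mode="r" reads only the header, so this stays cheap.
        self.sizes = [np.load(p, mmap_mode="r").shape[0] for p in self.paths]

    def __getitem__(self, index):
        for path, size in zip(self.paths, self.sizes):
            if index < size:
                return np.load(path, mmap_mode="r")[index]
            index -= size
        raise IndexError(index)


# Demo on a tiny stand-in array (real shape would be (n_images, 36, 2048)).
tmp = tempfile.mkdtemp()
full = np.arange(10 * 2 * 3, dtype=np.float32).reshape(10, 2, 3)
np.save(os.path.join(tmp, "train_ims.npy"), full)
split_npy(os.path.join(tmp, "train_ims.npy"), 4, tmp)
feats = ChunkedFeatures(tmp, 4)
print(np.array_equal(feats[7], full[7]))  # True
```

Loading each chunk with `mmap_mode="r"` also means no chunk is ever fully resident in RAM, which is what avoids the 31.1 GiB allocation.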

@sjy234sjy234
Author

sjy234sjy234 commented Sep 22, 2020 via email

@HHeracles

@sjy234sjy234 @KunpengLi1994 Hello! May I ask a few questions? When I run the command evaluation.evalrank("../Pretrain/pretrain_model/flickr/model_fliker_1.pth.tar", data_path="data", split="test", fold5=False), the data loaded is "data/f30k_precomp/test_ims.npy".
First, why is the shape of an image (36, 2048)? What does 36 mean here: 36 regions? The image features are fed directly into the graph built from "Rs_GCN" modules. Which part of the code generates the salient image regions from bottom-up attention?
Second, if my input image is a .jpg, how do I convert it to the (36, 2048) form the code expects?
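For what it's worth: in the bottom-up-attention pipeline this work builds on, 36 is the number of detected salient regions per image and 2048 is the Faster R-CNN feature dimension; extraction from raw .jpg files is done offline by the bottom-up attention detector, not by code in this repo, so the repo only ever sees precomputed arrays. A tiny sketch of the expected layout, using random stand-in data in place of test_ims.npy:

```python
import numpy as np

# Stand-in for data/f30k_precomp/test_ims.npy: one row per image,
# each image described by 36 region features of 2048 dims.
n_images = 5
ims = np.random.rand(n_images, 36, 2048).astype(np.float32)

first = ims[0]
print(first.shape)  # (36, 2048): 36 salient regions, one 2048-d vector each
```

To produce such features for your own .jpg images, you would run them through the pretrained bottom-up attention detector and save the top-36 region features per image as a .npy array in this layout.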
