Training my own model #1
I believe you might be using the wrong |
Hi Seth, thanks for the response! However, I am using the pretrained VQA model, so I did not generate adict.json or vdict.json files (as specified in the instructions). What is the vocab.json file you are referring to and how do I swap it out? Did you mean vdict.json? |
Sorry, I meant the |
Oh ok, so I should create a new The command I'm running is: |
Hey Seth, so I recreated my |
Correct. |
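To summarize the fix above: the answer/vocabulary dictionaries loaded at generation time must be the exact ones produced by the training run, otherwise the model's output indices decode to the wrong words. A minimal sanity check along those lines is sketched below — the file names follow the thread (`adict.json`, `vdict.json`), but the paths and the helper function are hypothetical, not part of the repo:

```python
import json

def dicts_match(train_path, gen_path):
    """Return True if two vocab files map the same tokens to the same indices.

    A mismatch between the dict used for training and the dict used by
    generate_explanations.py would explain near-zero VQA accuracy even
    when training metrics look fine.
    """
    with open(train_path) as f:
        train_dict = json.load(f)
    with open(gen_path) as f:
        gen_dict = json.load(f)
    return train_dict == gen_dict

# Hypothetical usage — adjust the paths to wherever your training run
# wrote its dicts and wherever generation reads them from:
# assert dicts_match("train_out/adict.json", "gen_in/adict.json")
# assert dicts_match("train_out/vdict.json", "gen_in/vdict.json")
```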
Hey! I have been facing a similar problem. I am not using the pretrained model but training from scratch on my own data. I see improvement in the explanation loss and accuracy, but the VQA loss doesn't seem to decrease. What could potentially be going wrong? If you have any insights, it would be really helpful. |
Hi. I've been trying to train my own models for VQA-X, but when I try to generate explanations from models I trained using train.py, my VQA answers and explanations are very poor.
I train for 50k iterations and the printed results during training look great, but when I run generate_explanations.py on the validation set, I only get 1/1459 VQA answers correct, and my explanations look off. In fact, when I test on the training set, I only get 15/29459 correct. However, when I use your pretrained model with generate_explanations.py on the validation set, I get 1073/1459 correct.
Is there a step I'm missing in going from training my own model with train.py to generating explanations with generate_explanations.py? Training for more iterations (going from 30k to 50k) doesn't seem to help.
Thanks