TypeError #1
Comments
I will test and get back to you as soon as possible.
Thanks! Looking forward to your reply.
I already found the problem; I fixed it by adding the following line.
These are the semantic features described in the paper; they should be 3-dimensional, (batch_size x cutoff x semantic_dimension). We use cutoff = 15, i.e. 15 detected words as input, and semantic_dimension = 300, i.e. 300-dimensional GloVe features. Thanks for testing the code (I can't test it myself since I can't access the cluster at the moment); feel free to ask if you have any further questions and I'll be happy to help.
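For illustration, a minimal sketch of the reshape that produces this layout (the variable names are placeholders, and the flat (256, 4500) starting shape is inferred from the traceback, since 15 x 300 = 4500):

```python
import numpy as np

batch_size, cutoff, semantic_dim = 256, 15, 300

# Placeholder for a flat feature matrix of shape (256, 4500), i.e. the
# 2-D input that triggers the TypeError (15 * 300 = 4500 columns).
flat_feats = np.random.rand(batch_size, cutoff * semantic_dim).astype('float32')

# Reshape to the 3-D layout the model expects:
# (batch_size, cutoff, semantic_dimension).
semantic_feats = flat_feats.reshape(batch_size, cutoff, semantic_dim)
assert semantic_feats.shape == (256, 15, 300)
```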
Thanks, I'll try again!
When I run "python train.py --saveto commoncraw_pretrained --dataset commoncrawl --cutoff 15", the code runs fine at the beginning, but it fails after 4379 steps:
I already fixed the bug by changing the line in generate_caps.py
When f_init is a list, ensemble decoding will be triggered automatically. Thanks for testing; feel free to ask further questions.
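Purely as an illustration of that dispatch, not the actual generate_caps.py code (the names below are placeholders): with a single compiled function the sketch follows the single-model path, and with a list it keeps one decoder state per model.

```python
import numpy as np

# Toy stand-ins for compiled Theano init functions (placeholders for this sketch).
f_init_a = lambda ctx: {'state': np.zeros(4)}
f_init_b = lambda ctx: {'state': np.ones(4)}

def init_states(f_init, ctx):
    """Single model: call the one compiled f_init.
    Ensemble: f_init is a list, so keep one decoder state per model."""
    if isinstance(f_init, list):
        return [f(ctx) for f in f_init]   # ensemble decoding path
    return [f_init(ctx)]                  # single-model path

print(len(init_states(f_init_a, ctx=None)))              # 1 -> single model
print(len(init_states([f_init_a, f_init_b], ctx=None)))  # 2 -> ensemble
```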
Thank you for your help! I can run the code now, but I have a question: why is the computed CIDEr score so high (CIDEr: 3.819)? The best CIDEr score on the Microsoft COCO Image Captioning Challenge is only 1.146.
Did you use the MS-COCO dataset for testing (not commoncrawl)? Note that my split could be different from your dataset split. Please refer to "ETHZ-Bootstrapped-Captioning/Data/coco/": there are files named "caption-train/val/test.json", with 5000/5000 images used for val/test and the rest for training; the split strategy comes from Karpathy's GitHub. You should verify that the training data doesn't contain your validation data; otherwise, you might need to re-split your dataset.
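If you want to check for leakage explicitly, a sketch along these lines could help; the JSON structure and the 'image_id' key are assumptions here, so adjust them to match the actual files in Data/coco/:

```python
import json

def image_ids(path, key='image_id'):
    """Collect the set of image identifiers from one caption split file.
    Assumes the file holds a list of dicts with an 'image_id' field;
    inspect the JSON and adjust the key if needed."""
    with open(path) as f:
        entries = json.load(f)
    return {entry[key] for entry in entries}

train_ids = image_ids('Data/coco/caption-train.json')
val_ids = image_ids('Data/coco/caption-val.json')
test_ids = image_ids('Data/coco/caption-test.json')

# Any overlap here would inflate validation/test scores such as CIDEr.
print('train/val overlap:', len(train_ids & val_ids))
print('train/test overlap:', len(train_ids & test_ids))
```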
Oh, I see! Thanks!
When I run "python train.py --saveto commoncraw_pretrained --dataset commoncrawl --cutoff 15", I get the following error:
Traceback (most recent call last):
  File "train.py", line 341, in <module>
    train(**common_kwargs)
  File "train.py", line 215, in train
    cost = f_grad_shared(x, mask, ctx, cnn_feats)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 786, in __call__
    allow_downcast=s.allow_downcast)
  File "/usr/local/lib/python2.7/dist-packages/theano/tensor/type.py", line 177, in filter
    data.shape))
TypeError: ('Bad input argument to theano function with name "attention_generator/optimizers.py:64" at index 2(0-based)', 'Wrong number of dimensions: expected 3, got 2 with shape (256, 4500).')
How do I solve it? Thanks!