
Are the different kinds of attention in "image_attention.py" redundant? #25

Closed
BasselAli1 opened this issue Dec 17, 2018 · 1 comment

@BasselAli1

Hi 😅
In image_attention.py there are three classes:

concatenate_attention
project_attention
double_project_attention

But they are never used; nothing else in the code calls them, from any other file or function. (I think we don't need them because we use the MFH model.)
We only use the top_down_attention class, in the build_image_attention_module function.
My questions are: are they redundant? And if we wanted to use the ordinary concatenate_attention or project_attention, should I modify the build_image_attention_module function to

return concatenate_attention(image_feat_dim, txt_rnn_embeding_dim, hidden_size)

?
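For illustration, a minimal sketch of that change. This is only my assumption: it supposes all three classes take the same constructor arguments as top_down_attention, and the method argument is hypothetical, not part of the current code:

# Hypothetical sketch: the `method` argument is an assumption and does not
# exist in the current code; it also assumes all attention classes share
# the same constructor signature (image_feat_dim, txt_rnn_embeding_dim,
# hidden_size).
def build_image_attention_module(image_feat_dim, txt_rnn_embeding_dim,
                                 hidden_size, method="top_down"):
    if method == "concatenate":
        return concatenate_attention(
            image_feat_dim, txt_rnn_embeding_dim, hidden_size)
    elif method == "project":
        return project_attention(
            image_feat_dim, txt_rnn_embeding_dim, hidden_size)
    # Default: the attention the code currently uses.
    return top_down_attention(
        image_feat_dim, txt_rnn_embeding_dim, hidden_size)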

@YuJiang01
Contributor

Yes, currently we do not use the different attention mechanisms.
