Hi, I just read your paper. One thing that confuses me is the description of the Graph Attention Layer.
The paper states that the input to this layer is h = {h_1, h_2, ..., h_N}. Should this instead be {m_1, m_2, ..., m_N}, i.e., the hidden states of the M-LSTM?
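To make the question concrete, here is a minimal sketch of the data flow I have in mind. This is not the authors' code: the class name `GraphAttentionLayer` is my own, and a plain `nn.LSTM` stands in for the M-LSTM. The point is only that the attention layer would consume the LSTM hidden states m_1..m_N rather than the raw inputs h_1..h_N:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head graph attention (Velickovic-style); hypothetical stand-in."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared linear map
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) adjacency mask
        z = self.W(x)                                      # (N, out_dim)
        N = z.size(0)
        # pairwise concatenation [z_i || z_j] for every node pair
        zi = z.unsqueeze(1).expand(N, N, -1)
        zj = z.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))         # attend to neighbors only
        alpha = torch.softmax(e, dim=-1)                   # (N, N) attention weights
        return alpha @ z                                   # aggregated node features

N, in_dim, hid = 5, 8, 16
h = torch.randn(N, 1, in_dim)                 # raw inputs h_1..h_N (batch of 1)
lstm = nn.LSTM(in_dim, hid)                   # stand-in for the M-LSTM (assumption)
m, _ = lstm(h)                                # m: hidden states m_1..m_N
m = m.squeeze(1)                              # (N, hid)
adj = torch.ones(N, N)                        # fully connected graph for the demo
out = GraphAttentionLayer(hid, hid)(m, adj)   # attention runs over m, not h
print(out.shape)                              # torch.Size([5, 16])
```

If the intended input really is h = {h_1, ..., h_N} as written, I'd appreciate a clarification of how that relates to the M-LSTM outputs.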