
MultiLayer Graph Attention Networks #92

Answered by danielegrattarola
adrinta asked this question in Q&A

Hi,

The code to implement a multi-layer GAT is correct. You don't need to pass a different adjacency matrix to each layer, because the graph structure (the edges) does not change after a GAT layer; only the node attributes change.
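As a minimal NumPy sketch (a simplified message-passing layer, not Spektral's actual GAT implementation), this shows why the same adjacency matrix can feed every layer: the features change shape from layer to layer, but the edges never do.

```python
import numpy as np

def message_passing_layer(x, a, w):
    """One simplified layer: aggregate neighbor features via `a`, then
    transform with `w`. The adjacency matrix is read, never modified."""
    return np.tanh(a @ x @ w)

rng = np.random.default_rng(0)
n_nodes, f_in, f_hidden, f_out = 5, 8, 16, 4

a = (rng.random((n_nodes, n_nodes)) < 0.4).astype(float)  # fixed graph structure
x = rng.standard_normal((n_nodes, f_in))                  # node features

w1 = rng.standard_normal((f_in, f_hidden))
w2 = rng.standard_normal((f_hidden, f_out))

h = message_passing_layer(x, a, w1)    # same `a` here ...
out = message_passing_layer(h, a, w2)  # ... and here: only features change
print(out.shape)  # (5, 4)
```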

The adjacency matrix is not technically replaced by the attention; you can see the attention as a re-scaling. Given an edge between two nodes, the adjacency matrix gives that connection a "weight" of 1, while the attention mechanism rescales that 1 to reflect how important the connection is.
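That rescaling can be sketched as a masked softmax over each node's neighbors (a simplification of GAT's actual scoring, which uses a learned linear form and LeakyReLU): where the binary adjacency has a 1, it is replaced by a normalized attention weight, and non-edges stay at 0.

```python
import numpy as np

def attention_rescale(a, scores):
    """Rescale a binary adjacency matrix: entries with a[i, j] == 1 become
    softmax-normalized attention weights; entries with a[i, j] == 0 stay 0."""
    masked = np.where(a > 0, scores, -np.inf)  # attend only over real edges
    e = np.exp(masked - masked.max(axis=1, keepdims=True))
    e = np.where(a > 0, e, 0.0)
    return e / e.sum(axis=1, keepdims=True)

a = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
scores = np.array([[0.5, 2.0, 9.9],   # raw compatibility scores (arbitrary here);
                   [1.0, 1.0, 1.0],   # note scores on non-edges are ignored
                   [9.9, 0.3, 0.7]])
att = attention_rescale(a, scores)
print(att)  # each row sums to 1; zeros exactly where `a` has no edge
```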

The two models that you posted are semantically different.
Keras's Input layer assumes an implicit batch size that you do not specify in the shape keyword.
So, your first …
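The implicit batch dimension can be seen directly (a sketch assuming tf.keras; the shapes shown are illustrative, not the ones from the original question):

```python
import tensorflow as tf

# `shape` excludes the batch dimension: Keras prepends an implicit `None`.
node_level = tf.keras.Input(shape=(4,))     # tensor shape (None, 4)
graph_level = tf.keras.Input(shape=(5, 4))  # tensor shape (None, 5, 4)

print(node_level.shape)   # (None, 4)
print(graph_level.shape)  # (None, 5, 4)
```

So `shape=(N, F)` describes a batch of graphs with N nodes each, while `shape=(F,)` describes a batch of individual nodes: two semantically different models.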

Replies: 2 comments 3 replies

Answer selected by danielegrattarola

This discussion was converted from issue #92 on December 09, 2020 18:11.