GATConv saved weights compatibility? #1755
Comments
You basically need to ensure that:

```
conv.lin_l.weight == conv_old.weight
conv.att_l.weight == conv_old.att[:, :, :out_channels]
conv.att_r.weight == conv_old.att[:, :, out_channels:]
```
OK. This is my conversion:
I checked that. You also need to transpose the `weight` matrix, since `lin_l` is an `nn.Linear`, which stores its weight as `[out_features, in_features]` rather than the old `[in_channels, heads * out_channels]` layout.
That yields an equal result for me.
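For reference, here is a minimal sketch of the full remapping described above. The helper name, the `prefix` argument, and the state-dict handling are illustrative assumptions, not part of the library:

```python
def convert_gat_weights(old_state, prefix, out_channels):
    """Remap an old-style GATConv entry in a checkpoint's state dict to the
    new parameter names. Hypothetical helper: `old_state` is the old state
    dict, `prefix` is the layer's key prefix (e.g. 'conv1.')."""
    weight = old_state.pop(prefix + 'weight')  # [in_channels, heads * out_channels]
    att = old_state.pop(prefix + 'att')        # [1, heads, 2 * out_channels]
    # nn.Linear stores its weight as [out_features, in_features], hence
    # the transpose mentioned above.
    old_state[prefix + 'lin_l.weight'] = weight.t().contiguous()
    old_state[prefix + 'att_l'] = att[..., :out_channels].contiguous()
    old_state[prefix + 'att_r'] = att[..., out_channels:].contiguous()
    return old_state
```

The remapped dict can then be loaded with `model.load_state_dict(..., strict=False)` if other keys are still missing.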
Hi, I have the same problem. Did you find an easy solution for it?
@chushan89, I used the conversion method provided by @rusty1s; it works for version 1.5.0. When I use the same method with 1.6.1, the converted model's output shows a small deviation. I guess the `lin_r` transform module introduced in 1.6.1 has some influence. For now I just set its weight to be the same as `lin_l`'s.
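A sketch of that workaround, assuming `conv` is the converted 1.6.1 layer (where `lin_l` and `lin_r` are `torch.nn.Linear` modules):

```python
import torch

# Tie lin_r's weight to lin_l's so both sides of the edge get the same
# transform, mimicking the single-weight behaviour of older versions.
with torch.no_grad():
    conv.lin_r.weight.copy_(conv.lin_l.weight)
```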
Hello @rusty1s,
In this implementation, it seems that `self.att_r` is not used and both parts of the concatenation are multiplied with the same vector `self.att_l` of dimension `(1, out_channels)`. In the paper, however, the vector has dimension `(1, 2 * out_channels)`, so the two parts of the concatenation are multiplied with individually learnable parameters. Could you explain whether the implementation is indeed different from the paper? Thank you very much!
Yeah, that is already fixed in master, see https://github.com/rusty1s/pytorch_geometric/blob/master/torch_geometric/nn/conv/gat_conv.py#L126-L127. Sorry for the inconvenience!
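For the record, splitting the attention vector into `att_l`/`att_r` is numerically equivalent to the paper's concatenated form. A small self-contained check, with arbitrary shapes:

```python
import torch

heads, out_channels, num_edges = 4, 8, 10
att = torch.randn(1, heads, 2 * out_channels)        # paper-style vector a
att_l, att_r = att[..., :out_channels], att[..., out_channels:]

x_i = torch.randn(num_edges, heads, out_channels)    # W h_i per edge
x_j = torch.randn(num_edges, heads, out_channels)    # W h_j per edge

# a^T [W h_i || W h_j] == a_l^T W h_i + a_r^T W h_j
concat = (torch.cat([x_i, x_j], dim=-1) * att).sum(dim=-1)
split = (x_i * att_l).sum(dim=-1) + (x_j * att_r).sum(dim=-1)
assert torch.allclose(concat, split, atol=1e-6)
```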
Hi, I have the same issue. |
Yes, the above solution should work for the current version as well.
Thank you for your reply and for your work! Unfortunately it is not working with v1.6.3. So far I've been trying to track the relevant changes that may break compatibility from v1.2.0 onwards, and the only change I see when diffing v1.1.2 and v1.2.0 is the computation of the alpha coefficient:

```diff
-    def message(self, x_i, x_j, edge_index, num_nodes):
+    def message(self, edge_index_i, x_i, x_j, num_nodes):
         # Compute attention coefficients.
         alpha = (torch.cat([x_i, x_j], dim=-1) * self.att).sum(dim=-1)
         alpha = F.leaky_relu(alpha, self.negative_slope)
-        alpha = softmax(alpha, edge_index[0], num_nodes)
+        alpha = softmax(alpha, edge_index_i, num_nodes)
```

And yeah, forward porting the v1.1.2 `GATConv` works. Any other ideas on how to adapt the old weights?
You may try to swap the values of `att_l` and `att_r`.
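That would look roughly like this, assuming `conv` is the converted layer and `att_l`/`att_r` are plain `Parameter`s:

```python
import torch

# Swap the two halves of the attention vector in place.
with torch.no_grad():
    tmp = conv.att_l.clone()
    conv.att_l.copy_(conv.att_r)
    conv.att_r.copy_(tmp)
```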
Thank you again. Unfortunately that didn't work either. I ended up modifying the v1.6.3 `GATConv` instead.
❓ Questions & Help
I used pytorch_geometric 1.3.2 to train a model that uses the GATConv module. Recently I switched to the latest PyG and found that GATConv has changed, so the old weights file can't be loaded. I modified the weights file, but I cannot get the same results as with the old version. Can somebody tell me how to modify the weights file so it works with the latest PyG?