Dear authors,
First of all, thank you very much for making your code available and for the interesting work presented in your paper "AdaProp: Learning Adaptive Propagation for Graph Neural Network based Knowledge Graph Reasoning". I have a few questions regarding the implementation and some specific details of the model. I would be grateful if you could provide some clarification on the following points:
I also want to mention that my background in this area is still developing, so some of my questions might seem a bit basic. I hope you don't mind, and I truly appreciate your patience in helping me understand these concepts better.
Inverse Relationships: Could you please explain the purpose of adding inverse relations to the knowledge graph? Does adding them imply that the directional nature of the relationships becomes less significant?
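To make sure I am asking about the right mechanism, here is my current reading of the augmentation as a minimal sketch (the triple format and the relation-indexing convention are my own assumptions, not taken from your code):

```python
def add_inverse_relations(triples, num_relations):
    """For every (head, rel, tail) triple, also add (tail, rel + num_relations, head).

    The inverse relation gets its own id, so direction is not discarded:
    the model can learn a separate representation for r and for its inverse,
    while every entity becomes reachable as a propagation start point
    regardless of the original edge direction.
    """
    augmented = list(triples)
    for h, r, t in triples:
        augmented.append((t, r + num_relations, h))
    return augmented

triples = [(0, 0, 1), (1, 1, 2)]          # (head, relation, tail)
aug = add_inverse_relations(triples, num_relations=2)
# aug additionally contains (1, 2, 0) and (2, 3, 1)
```

Is this roughly what the code does, i.e. direction is preserved through distinct inverse-relation ids rather than ignored?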
Loss Function: It seems that the loss function described in the paper does not exactly match the one implemented in the code. Specifically, I found the following:
loss = torch.sum(- pos_scores + max_n + torch.log(torch.sum(torch.exp(scores - max_n), 1)))
Could you provide some insights into this discrepancy?
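To state my reading of that line concretely: subtracting max_n before exponentiating looks like the standard log-sum-exp stabilization, so the expression appears equivalent to -pos_scores + logsumexp(scores), i.e. a softmax cross-entropy over the candidate entities rather than a different loss. A minimal pure-Python sketch of what I mean (the names are mine, not from the repository):

```python
import math

def stable_log_loss(pos_score, scores):
    """-pos_score + logsumexp(scores), computed with the max-subtraction
    trick used in the quoted line; numerically stable for large scores."""
    max_n = max(scores)
    return -pos_score + max_n + math.log(sum(math.exp(s - max_n) for s in scores))

scores = [2.0, 0.5, -1.0]                  # scores for all candidates
loss = stable_log_loss(scores[0], scores)  # candidate 0 is the positive one

# the naive softmax cross-entropy of the positive candidate
naive = -math.log(math.exp(scores[0]) / sum(math.exp(s) for s in scores))
# loss and naive agree up to floating-point error
```

So my question is really whether the difference from the paper's formula is only this numerical-stability rewriting, or whether the training objective itself was changed.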
Full Propagation Mechanism in CompGCN: In the case of CompGCN, do you consider "Full propagation" to mean that after each GNN layer, the embeddings of all nodes in the graph are updated simultaneously?
Gumbel Sampling: I understand that Gumbel sampling allows gradients to be computed for discrete variables during backpropagation. Could you please elaborate on why you additionally use the straight-through (ST) estimator in this context?
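My current understanding, stated concretely: without the straight-through trick, the Gumbel-softmax sample stays a dense mixture over all candidates, whereas ST makes the forward pass a genuine one-hot selection (which a discrete node-sampling step seems to need) while gradients still flow through the soft relaxation. A minimal sketch of the ST construction as I understand it (my own code, not your implementation):

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_st(logits, tau=1.0):
    """Straight-through Gumbel-softmax.

    Forward pass: a hard one-hot sample, so downstream logic sees an
    actual discrete selection. Backward pass: gradients of the soft
    relaxation, routed through via the detach trick.
    """
    gumbels = -torch.log(-torch.log(torch.rand_like(logits) + 1e-10) + 1e-10)
    y_soft = F.softmax((logits + gumbels) / tau, dim=-1)
    y_hard = F.one_hot(y_soft.argmax(dim=-1), num_classes=logits.shape[-1])
    y_hard = y_hard.to(logits.dtype)
    # forward value equals y_hard; gradient flows only through y_soft
    return y_hard - y_soft.detach() + y_soft

logits = torch.tensor([[1.0, 2.0, 0.5]], requires_grad=True)
sample = gumbel_softmax_st(logits)                    # one-hot in the forward pass
(sample * torch.tensor([[0.5, 1.0, 2.0]])).sum().backward()
# logits.grad is populated despite the discrete forward sample
```

(I believe this is also what PyTorch's F.gumbel_softmax with hard=True does.) Is the motivation for ST here that the soft sample alone would blur the selected subgraph, or is there another reason?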
Thank you in advance for your time and consideration. Your guidance on these questions would be immensely helpful in understanding the details of your implementation.
Best regards.