add different GNN models #60
Conversation
flexynesis/modules.py
Outdated
self.fc1 = nn.Linear(input_dim, hidden_dim).to(self.device)
self.bn_ff1 = nn.BatchNorm1d(hidden_dim).to(self.device)

self.fc2 = nn.Linear(hidden_dim, output_dim).to(self.device)
What is the motivation of using fully connected layers here?
flexynesis/modules.py
Outdated
x = self.activation(self.bn_ff1(self.fc1(x)))
x = self.dropout(x)

out = self.fc2(x)
Same
@@ -261,6 +432,7 @@ def forward(self, x, edge_index, batch):
    Returns:
        Tensor: The output tensor after processing through the GCNN, with shape [num_nodes, output_dim].
    """
    #print('batch:', batch.x)
nit
#print('batch:', batch.x) |
- reorder, cleanup code
- remove device management from the module
- set number_layers, dropout, activation function as hyperparameters
- make convolution type an option exposed to DirectPredGCNN main class
- add GraphConv as an additional convolution option
…ution from CLI; make sanity checks
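For context, a rough sketch of what the commit above describes: the convolution type is chosen from a string option (so it can be passed down from the CLI), and depth, dropout and activation become constructor arguments, with GraphConv available alongside GCNConv. The names here (GNNSketch, CONV_TYPES, num_layers) are illustrative placeholders, not the actual flexynesis/DirectPredGCNN interface; the convolution classes are the standard PyTorch Geometric layers.

import torch.nn as nn
from torch_geometric.nn import GCNConv, GraphConv, SAGEConv, global_mean_pool

# Map a string option (e.g. coming from the CLI) to a PyG convolution class.
CONV_TYPES = {"GCNConv": GCNConv, "GraphConv": GraphConv, "SAGEConv": SAGEConv}

class GNNSketch(nn.Module):
    # Hypothetical module, not flexynesis code.
    def __init__(self, input_dim, hidden_dim, output_dim,
                 conv="GraphConv", num_layers=2, dropout=0.2, activation=nn.ReLU()):
        super().__init__()
        conv_cls = CONV_TYPES[conv]
        dims = [input_dim] + [hidden_dim] * (num_layers - 1) + [output_dim]
        # One convolution + batch norm per layer; no .to(device) calls,
        # device placement is left to the caller.
        self.convs = nn.ModuleList(conv_cls(i, o) for i, o in zip(dims[:-1], dims[1:]))
        self.bns = nn.ModuleList(nn.BatchNorm1d(o) for o in dims[1:])
        self.dropout = nn.Dropout(dropout)
        self.activation = activation

    def forward(self, x, edge_index, batch):
        for conv, bn in zip(self.convs, self.bns):
            x = self.dropout(self.activation(bn(conv(x, edge_index))))
        return global_mean_pool(x, batch)  # aggregate node embeddings per graph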
    x = self.dropout(self.activation(batch_norm(conv(x, edge_index))))
else:
    x = self.dropout(self.activation(batch_norm(conv(x, edge_index))))
Won't dropout damage the embeddings during the training step?
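(An aside, not part of the PR diff: the concern is that torch.nn.Dropout zeroes a random subset of activations whenever the module is in training mode, while in eval mode it is the identity. A minimal demonstration:)

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()
print(drop(x))  # roughly half the entries zeroed, survivors scaled by 1 / (1 - p) = 2

drop.eval()
print(drop(x))  # unchanged: dropout is a no-op in eval mode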
    x = self.dropout(self.activation(batch_norm(conv(x, edge_index))))
else:
    x = self.dropout(self.activation(batch_norm(conv(x, edge_index))))
return x
why don't you aggregate here? like in forward
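(For reference, "aggregate like in forward" presumably means pooling node-level embeddings into one vector per graph. A minimal sketch, assuming the usual PyTorch Geometric global_mean_pool; the function name is illustrative and the pooling choice is not taken from the PR diff:)

from torch_geometric.nn import global_mean_pool

def aggregate_node_embeddings(node_embeddings, batch):
    # Hypothetical helper: batch[i] is the graph index of node i; mean-pooling
    # the nodes of each graph gives a [num_graphs, embedding_dim] tensor, as in forward.
    return global_mean_pool(node_embeddings, batch)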
Again, dropout will damage the embeddings during training.
@borauyar
A potential source of problems with the new GNN training.
We don't actually use the extract_embeddings function; the embeddings are extracted from the forward function, which has the same lines. Do you think we need to disable dropout in the forward?
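(For what it's worth, the usual PyTorch pattern is that forward itself does not need to change: putting the model into eval mode before extracting embeddings already turns dropout off and makes batch norm use its running statistics. A sketch only, with model and its inputs as placeholders:)

import torch

def extract_embeddings_eval(model, x, edge_index, batch):
    # Hypothetical wrapper around the model's forward pass.
    model.eval()             # dropout becomes a no-op; batch norm uses running stats
    with torch.no_grad():    # gradients are not needed for embedding extraction
        return model(x, edge_index, batch)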
Hey @trsvchn,
I added new GNN architectures in this branch.