add different GNN models #60

Merged (5 commits, Mar 4, 2024)

Conversation

@asarigun (Member) commented Mar 4, 2024:

Hey @trsvchn,

I added new GNN architectures in this branch.

Comment on lines 340 to 343
self.fc1 = nn.Linear(input_dim, hidden_dim).to(self.device)
self.bn_ff1 = nn.BatchNorm1d(hidden_dim).to(self.device)

self.fc2 = nn.Linear(hidden_dim, output_dim).to(self.device)

Member:

What is the motivation for using fully connected layers here?
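
For context, a common motivation for fully connected layers in a GNN (not necessarily the one intended here) is a prediction head that maps the pooled or per-node embeddings to the output dimension. A minimal sketch of that pattern, assuming PyTorch Geometric; the class and variable names are illustrative and not taken from this PR:

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool


class GCNWithHead(nn.Module):
    """Illustrative GNN followed by a fully connected prediction head."""

    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.conv1 = GCNConv(input_dim, hidden_dim)
        # Fully connected layers map the (pooled) embedding to the target dimension.
        self.fc1 = nn.Linear(hidden_dim, hidden_dim)
        self.bn_ff1 = nn.BatchNorm1d(hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.conv1(x, edge_index))
        x = global_mean_pool(x, batch)            # one vector per graph
        x = torch.relu(self.bn_ff1(self.fc1(x)))
        return self.fc2(x)
```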

Comment on lines 391 to 394
x = self.activation(self.bn_ff1(self.fc1(x)))
x = self.dropout(x)

out = self.fc2(x)

Member:

Same question as above.

@@ -261,6 +432,7 @@ def forward(self, x, edge_index, batch):
Returns:
Tensor: The output tensor after processing through the GCNN, with shape [num_nodes, output_dim].
"""
#print('batch:', batch.x)

Member:

nit

Suggested change: remove the leftover debug print line `#print('batch:', batch.x)`.

borauyar added 3 commits March 4, 2024 16:21
- reorder, cleanup code
- remove device management from the module
- set number_layers, dropout, activation function as hyperparameters
- make convolution type an option exposed to DirectPredGCNN main class
- add GraphConv as an additional convolution option
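
A rough sketch of what a module along the lines of these commits could look like, assuming PyTorch Geometric; the class name, option strings, and defaults below are illustrative and not copied from the PR:

```python
import torch.nn as nn
from torch_geometric.nn import GCNConv, GraphConv, SAGEConv, global_mean_pool

# Hypothetical mapping from a string option to a PyG convolution class.
CONV_TYPES = {"GCN": GCNConv, "GraphConv": GraphConv, "SAGE": SAGEConv}


class ConfigurableGNN(nn.Module):
    """Illustrative GNN where depth, dropout, activation, and conv type are hyperparameters."""

    def __init__(self, input_dim, hidden_dim, output_dim,
                 number_layers=2, dropout=0.2, activation=nn.ReLU(), conv="GCN"):
        super().__init__()
        conv_cls = CONV_TYPES[conv]
        dims = [input_dim] + [hidden_dim] * number_layers
        self.convs = nn.ModuleList(
            conv_cls(dims[i], dims[i + 1]) for i in range(number_layers)
        )
        self.batch_norms = nn.ModuleList(
            nn.BatchNorm1d(hidden_dim) for _ in range(number_layers)
        )
        self.activation = activation
        self.dropout = nn.Dropout(dropout)
        self.out = nn.Linear(hidden_dim, output_dim)

    def forward(self, x, edge_index, batch):
        for conv, bn in zip(self.convs, self.batch_norms):
            x = self.dropout(self.activation(bn(conv(x, edge_index))))
        x = global_mean_pool(x, batch)  # graph-level aggregation
        return self.out(x)
```

DirectPredGCNN could then pass the chosen convolution type down as a plain string or class, keeping the convolution an exposed hyperparameter rather than a hard-coded layer.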
@borauyar merged commit c0ebf36 into main on Mar 4, 2024
Comment on lines +357 to +359
x = self.dropout(self.activation(batch_norm(conv(x, edge_index))))
else:
x = self.dropout(self.activation(batch_norm(conv(x, edge_index))))

Member:

Dropout will damage the embeddings during the training step, no?

x = self.dropout(self.activation(batch_norm(conv(x, edge_index))))
else:
x = self.dropout(self.activation(batch_norm(conv(x, edge_index))))
return x

Member:

Why don't you aggregate here, like in forward?


@trsvchn (Member) commented Mar 4, 2024:

Again, dropout will damage the embeddings during training.

@borauyar, this is a potential source of problems with the new GNN training.


Member:

We don't actually use the extract_embeddings function; the embeddings are extracted from the forward function, which has the same lines. Do you think we need to disable dropout in the forward?
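
If the worry is dropout noise leaking into extracted embeddings, one common pattern is to switch the model to eval mode during extraction and aggregate the same way forward does. A sketch only; it assumes the module keeps its layers in convs/batch_norms lists with an activation attribute, which may not match the PR's code:

```python
import torch
from torch_geometric.nn import global_mean_pool


@torch.no_grad()
def extract_graph_embeddings(model, data):
    """Hypothetical helper: extract embeddings with dropout disabled (eval mode)."""
    model.eval()  # nn.Dropout becomes a no-op; BatchNorm1d uses running statistics
    x = data.x
    for conv, bn in zip(model.convs, model.batch_norms):
        x = model.activation(bn(conv(x, data.edge_index)))  # no dropout at inference
    # Aggregate node embeddings per graph, mirroring the forward pass.
    return global_mean_pool(x, data.batch)
```

In eval mode nn.Dropout is the identity and BatchNorm1d uses its running statistics, so forward can keep dropout for training while the same weights give deterministic embeddings at extraction time.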

@borauyar deleted the gnn_dev branch on December 31, 2024, 13:12