
Change backbone finetuning strategy to allow for DDP #205

Merged: 1 commit merged into paninski-lab:main on Oct 16, 2024

Conversation

@ksikka (Collaborator) commented on Oct 15, 2024

See Lightning-AI/pytorch-lightning#20340 for the original issue. The implemented workaround no longer freezes/unfreezes layers via requires_grad; instead, it only modulates the learning rate of the backbone.

Unblocks #138
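
For context, here is a minimal sketch of the mechanism, assuming an optimizer built with two param groups (backbone first, upsampling head second) and an arbitrary unfreeze epoch; it is an illustration of the idea, not the code merged in this PR:

import pytorch_lightning as pl


class BackboneLRModulation(pl.Callback):
    """Sketch: leave requires_grad untouched and 'freeze' the backbone by
    driving its learning rate, so the autograd graph is identical on every
    rank and every epoch and DDP has nothing to complain about."""

    def __init__(self, unfreeze_epoch: int = 20):  # assumed threshold, not from the PR
        self.unfreeze_epoch = unfreeze_epoch

    def on_train_epoch_start(self, trainer, pl_module):
        optimizer = trainer.optimizers[0]
        # Assumption: param group 0 holds the backbone, group 1 the upsampling head.
        backbone_group, upsampling_group = optimizer.param_groups[:2]
        backbone_group["lr"] = self._get_next_backbone_lr(
            pl_module.current_epoch, backbone_group["lr"], upsampling_group["lr"]
        )

    def _get_next_backbone_lr(self, current_epoch, backbone_lr, upsampling_lr):
        # Simplest possible schedule: zero LR until unfreeze_epoch, then match the head LR.
        return 0.0 if current_epoch < self.unfreeze_epoch else upsampling_lr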

@ksikka requested review from danbider and themattinthehatt and removed the request for danbider on October 15, 2024 at 00:34
@ksikka force-pushed the main branch 2 times, most recently from 913b9f2 to de97315 on October 15, 2024 at 00:36
@ksikka (Collaborator, Author) commented on Oct 15, 2024

[screenshot]

Run with and without code changes:
[screenshot]

Review thread on lightning_pose/models/base.py (outdated):
pl_module.current_epoch, backbone_lr, upsampling_lr
)

def _get_next_backbone_lr(self, current_epoch, backbone_lr, upsampling_lr):
A collaborator commented:
Good to comment on the if conditions; what is the LR schedule that is implemented here?
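
The merged schedule itself is not shown in this excerpt; purely as an illustration, a more gradual variant of the sketch above could look like the following (the unfreeze epoch, warm-up ratio, and initial fraction are assumptions, not values from the PR):

def _get_next_backbone_lr(current_epoch, backbone_lr, upsampling_lr,
                          unfreeze_epoch=20, warmup_ratio=1.5):
    # Phase 1: backbone effectively frozen (zero LR), but gradients still flow for DDP.
    if current_epoch < unfreeze_epoch:
        return 0.0
    # Phase 2: start the warm-up from a small fraction of the upsampling-head LR.
    if current_epoch == unfreeze_epoch:
        return 0.1 * upsampling_lr
    # Phase 3: grow the backbone LR multiplicatively, capped at the upsampling-head LR.
    return min(backbone_lr * warmup_ratio, upsampling_lr)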

@danbider (Collaborator) left a comment:

very happy to see these updates -- thanks @ksikka!

Resolved review threads: lightning_pose/models/base.py, tests/test_callbacks.py (2 threads)
@themattinthehatt themattinthehatt merged commit d6d62e7 into paninski-lab:main Oct 16, 2024