
auto_lr_find does not work #1983

Closed
krisho007 opened this issue May 28, 2020 · 10 comments · Fixed by #2821
Labels: help wanted (Open to be worked on), priority: 0 (High priority task)

@krisho007

🐛 Bug

I am using the auto_lr_find feature as below:
trainer = pl.Trainer(fast_dev_run=False, gpus=1, auto_lr_find=True)

My model has the self.learning_rate attribute, as below (part of the model):

import torch
import torch.nn as nn
import pytorch_lightning as pl
from transformers import BertModel

class TweetSegment(pl.LightningModule):
    def __init__(self, config, lr=3e-5):
        super(TweetSegment, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased', config=config)
        self.drop_out = nn.Dropout(0.1)
        self.fullyConnected = nn.Sequential(nn.Linear(2*768, 2), nn.ReLU())
        self.learning_rate = lr
        self._init_initial()  # user-defined helper, defined elsewhere in the model

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.learning_rate)

When I call 'fit' using the line below
trainer.fit(tweetModel, train_dataloader=training_loader, val_dataloaders=valid_loader)
I still get the error:
MisconfigurationException: When auto_lr_find is set to True, expects that hparams either has field `lr` or `learning_rate` that can be overridden
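For context: the error message suggests the 0.7.x learning-rate finder checked model.hparams for an `lr` or `learning_rate` field, rather than a bare attribute on the module, which would explain why self.learning_rate alone did not satisfy it. A minimal workaround sketch under that assumption, on a cut-down version of the model above (the Namespace wrapper is illustrative, not part of the original report):

from argparse import Namespace

import pytorch_lightning as pl
import torch
import torch.nn as nn

class TweetSegment(pl.LightningModule):
    def __init__(self, lr=3e-5):
        super().__init__()
        self.fullyConnected = nn.Sequential(nn.Linear(2*768, 2), nn.ReLU())
        # Put the learning rate where the 0.7.x finder looks for it:
        # an `lr` (or `learning_rate`) field on self.hparams.
        self.hparams = Namespace(lr=lr)

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.hparams.lr)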

Expected behavior

No error when running 'fit'.

Environment

  • CUDA:
    • GPU:
      • Tesla P100-PCIE-16GB
    • available: True
    • version: 10.1
  • Packages:
    • numpy: 1.18.1
    • pyTorch_debug: False
    • pyTorch_version: 1.5.0
    • pytorch-lightning: 0.7.6
    • tensorboard: 2.1.1
    • tqdm: 4.45.0
  • System:
    • OS: Linux
    • architecture:
      • 64bit
    • processor: x86_64
    • python: 3.7.6
    • version: #1 SMP Wed May 6 00:27:44 PDT 2020
@krisho007 added the help wanted label May 28, 2020
@github-actions
Contributor

Hi! Thanks for your contribution, great first issue!

@SkafteNicki
Member

The problem does not seem to be present on the master branch; could you try upgrading?

@SkafteNicki mentioned this issue May 29, 2020
@krisho007
Author

So this seems to be a bug, to be fixed by #1988?

@krisho007
Author

krisho007 commented May 30, 2020

> The problem does not seem to be present on the master branch; could you try upgrading?

I am already on 0.7.6, so I am not sure how to upgrade to the master branch. Can you please guide me?

@williamFalcon
Contributor

Bottom of the docs, under “bleeding edge”.
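For anyone else looking: installing the bleeding-edge version typically follows the usual pip-from-git pattern (a sketch; check the docs section above for the exact command):

pip install git+https://github.com/PyTorchLightning/pytorch-lightning.git@master --upgrade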

@Makoto1733

I am now having the same problem. I am using self.hparams as a dict, on 0.7.6. Could someone give some suggestions?
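One thing that may matter here (an assumption, based on the error message above, which talks about fields on hparams): if the check uses attribute access, a plain dict has no lr attribute even when it has an 'lr' key. Converting the dict to a Namespace is a possible workaround; a short sketch with illustrative values:

from argparse import Namespace

hparams_dict = {'lr': 3e-5, 'batch_size': 32}  # illustrative dict-style hparams
hparams = Namespace(**hparams_dict)            # now hparams.lr works as an attribute
# then inside the LightningModule __init__: self.hparams = hparams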

@dscarmo
Contributor

dscarmo commented Jun 19, 2020

I get the same error even on 0.8.0, despite having hparams.lr.
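A possible interim workaround while this is broken: run the finder manually instead of via auto_lr_find. This is a sketch based on the trainer.lr_find API that existed around 0.7/0.8 (model and training_loader stand in for your own objects; exact signatures may differ across versions):

import pytorch_lightning as pl

trainer = pl.Trainer(gpus=1)
# Run the LR range test directly instead of through auto_lr_find.
lr_finder = trainer.lr_find(model, train_dataloader=training_loader)
model.hparams.lr = lr_finder.suggestion()  # apply the suggested rate
trainer.fit(model, train_dataloader=training_loader)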

@williamFalcon
Contributor

We need to adjust the learning rate finder to work with the new hparams. @SkafteNicki
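Presumably the "new hparams" here is the save_hyperparameters() mechanism from 0.8, which records __init__ arguments on self.hparams instead of requiring an hparams argument. A rough sketch of that pattern (the module and layer are illustrative):

import pytorch_lightning as pl
import torch

class LitModel(pl.LightningModule):
    def __init__(self, lr=3e-5):
        super().__init__()
        # Records init arguments on self.hparams, so self.hparams.lr == 3e-5.
        self.save_hyperparameters()
        self.layer = torch.nn.Linear(32, 2)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)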

@edenlightning
Contributor

@SkafteNicki is this still broken on master?

@SkafteNicki
Member

@edenlightning I checked this morning, and the problem still seems to be present. I will create a PR soon with a fix.
