Rename and fix overfit_pct, val_percent_check and test_percent_check #2213
Conversation
Hello @williamFalcon! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2020-06-17 11:54:19 UTC
@williamFalcon are you trying to fix this: #1920?
@awaelchli partly. Also enabling these flags to take a number of batches instead of just a percent.
Besides the new Trainer flags, it's not easy to see what the difference from #1920 is. It seems you have re-implemented it.
So shall #1920 be merged first? Otherwise it may raise plenty of conflicts...
let's merge #1920 first 🐰
This pull request is now in conflict... :(
Codecov Report

@@           Coverage Diff            @@
##           master   #2213   +/-   ##
=======================================
- Coverage      88%     88%    -0%
=======================================
  Files          69      69
  Lines        5269    5289    +20
=======================================
+ Hits         4625    4637    +12
- Misses        644     652     +8
btw, this was not reviewed, as @awaelchli and @Borda asked to rebase it on #1920.
I wonder if overfit_batches can be removed entirely. It's good to strive for an API where there's only one way to do things, and I feel the problem with these Trainer args is that it's not clear what the difference is between setting overfit_batches=1, limit_val_batches=1, and limit_train_val=1. Regardless, this PR is still an improvement over how it was before. Is there any reason you didn't put deprecation warnings in?
This was a two-part PR. This was the time to remove overfit, but I realized it actually serves a different purpose: overfit_batches uses the train set for train, val and test so you can truly overfit. It also removes the shuffle from the training set so you iterate over the exact same data. However, the other 3 flags don't serve that purpose; those are more for only using some of the data for training, val or test (likely for debugging purposes).
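A minimal sketch of that distinction in Trainer usage (the concrete values here are illustrative, not taken from the PR):

```python
from pytorch_lightning import Trainer

# Sanity-check that the model can memorize: overfit_batches reuses the same
# training batches for train/val/test and turns off shuffling of the train set.
overfit_trainer = Trainer(overfit_batches=10)

# Debugging-style run: each loop keeps its own dataloader, but only part of it
# is used -- a float is read as a fraction, an int as an exact batch count.
debug_trainer = Trainer(
    limit_train_batches=0.1,  # 10% of the training batches
    limit_val_batches=5,      # exactly 5 validation batches
    limit_test_batches=5,     # exactly 5 test batches
)
```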
Hi, I'm getting an error from a line added in this PR: 04c794c#diff-c611176c04f224f1ec16a768aec9c527R101. Turns out ... I'm writing this here instead of in a new issue, as this is a very fresh PR; I hope that's ok. Also, I wonder how this got past the tests.
@pwl good catch, mind sending a fix PR?
Fixes a minor bug introduced in #2213
Fixes #1668
Fixes #1920
Fixes #830
Overfit
- renames overfit_pct to overfit_batches, which can be a float (%) or int (steps)

limit_xxx_batches
- renames train_percent_check and val_percent_check to limit_train_batches and limit_val_batches, which can also be int (steps) or percent (float); see the sketch below
- docs + tests
@tullie these flag names make sense?
TODO: also replace train_percent_check with limit_train_batches
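Below is a rough sketch of how an int-or-float value for these flags could be resolved into an actual batch count; the helper name _resolve_num_batches is hypothetical and not taken from the PR's implementation.

```python
def _resolve_num_batches(limit, total_batches):
    """Interpret an int as an absolute batch count and a float as a fraction.

    Hypothetical helper illustrating the flag semantics described above;
    the PR's actual resolution logic may differ.
    """
    if isinstance(limit, float):
        # e.g. limit_val_batches=0.25 -> run 25% of the available batches
        return int(total_batches * limit)
    # e.g. limit_val_batches=10 -> run exactly 10 batches
    return min(limit, total_batches)


# _resolve_num_batches(0.25, 200) -> 50
# _resolve_num_batches(10, 200)   -> 10
```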