Refactor evaluation loop for empty frames #858
Conversation
Tensorboard is no longer a required dependency of PyTorch Lightning, so it needs to be installed explicitly. I am updating the tests since this was caught as a side effect and is not really about tensorboard itself.
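Since tensorboard is now an optional dependency, downstream code can check for it before relying on it. A minimal sketch; the helper name is hypothetical and not part of any library API:

```python
import importlib.util

def tensorboard_available() -> bool:
    """Return True if the optional tensorboard package is importable."""
    return importlib.util.find_spec("tensorboard") is not None

# When it is missing, install it explicitly, e.g. `pip install tensorboard`.
```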
@henrykironde this is ready. Sorry for the delay with all the docstring commits; my sphinx version was outdated.
@bw4sz, this looks good! One thought: could we handle the try blocks more specifically? They feel a bit broad as they are.
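One way to narrow a broad try block is to catch only the exception the guarded operation is actually expected to raise. A generic illustration of the pattern, not the PR's actual code:

```python
def safe_ratio(numerator: float, denominator: float):
    """Return numerator / denominator, or None when the denominator is zero.

    A bare `except Exception` here would also swallow a TypeError from bad
    arguments; catching only ZeroDivisionError keeps real bugs visible.
    """
    try:
        return numerator / denominator
    except ZeroDivisionError:
        return None
```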
Force-pushed from 692bcc7 to 68a69d7.
I rebased and noticed that somehow the batch tests were commented out; I'll make a separate PR for that, since it is unrelated.
This PR aims to standardize, document, and better test the situations in which 1) the model makes no predictions but there is ground truth to evaluate, and 2) there is no ground truth in validation but the model makes predictions. I introduce a new evaluation metric, 'empty frame accuracy', that will be useful for many users, and I added tests and docs. While working on this, I found that the test named validation_step wasn't fully testing validation_step but rather trainer.validate, which is related but not the same. I added a proper validation_step test, which required silencing the loggers, because PyTorch Lightning is overeager in throwing errors when self.validation_step is run outside of trainer.validate, something that is only needed in testing situations.
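The empty-frame metric described above could be sketched as the fraction of truly empty images that the model also predicts as empty. This is a hedged sketch under assumed inputs (dicts mapping image name to a list of boxes); the function name and signature are hypothetical, not DeepForest's actual API:

```python
def empty_frame_accuracy(predictions, ground_truth):
    """Fraction of ground-truth-empty images the model also leaves empty.

    predictions / ground_truth: dict mapping image name -> list of boxes.
    Hypothetical illustration; the real implementation may differ.
    """
    empty_images = [img for img, boxes in ground_truth.items() if not boxes]
    if not empty_images:
        return None  # metric is undefined when there are no empty frames
    # An image counts as correct when the model predicts no boxes for it.
    correct = sum(1 for img in empty_images if not predictions.get(img))
    return correct / len(empty_images)
```

For example, with two ground-truth-empty images where the model leaves one empty and predicts a box on the other, the metric is 0.5.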