Bug and question about logging -- missing epoch, validation before train? #1520
Comments
Hi! Thanks for your contribution, great first issue!
Just to clarify your bug: you are missing some metrics? If I understand it correctly, this should be your fix: #1459
Hi @Borda, I don't think that captures it. I'm reporting one clear bug (the epoch field doesn't get to the logger, since it's added to the `metrics` dict rather than to `scalar_metrics`). The second bug I'm reporting may not be a bug, but at the very least it's confusing behavior. As a user, I would expect my logger to first get training results for an epoch, and then validation results. The PR you're referencing might solve the problem of the last training results not being returned at all, but it won't fix the fact that something in the way steps are used in the logger causes validation results to come before training results.
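Concretely, the shape of the problem and the one-line fix look roughly like this (paraphrasing from memory for illustration, not the actual Lightning source; everything except `metrics` and `scalar_metrics` is approximate):

```python
# Paraphrased shape of TrainerLoggingMixin.log_metrics()
def log_metrics(self, metrics, grad_norm_dic, step=None):
    # tensors are converted to plain scalars before reaching the logger
    scalar_metrics = self.metrics_to_scalars(metrics)

    step = step if step is not None else self.global_step

    # buggy version: `metrics` is never read again after this point
    # metrics['epoch'] = self.current_epoch

    # fix: attach the epoch to the dict that is actually handed to the logger
    scalar_metrics['epoch'] = self.current_epoch

    if self.logger is not None:
        self.logger.log_metrics(scalar_metrics, step=step)
        self.logger.save()
```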
I updated under 'Expected Behavior' to clarify the discrepancy I found.
@guydav mind sending a PR? :]
@Borda for the first issue, sure, it's a one-line fix, but I'd need to set up my environment to be able to run the tests. For the second issue, I don't quite know where it originates. I was hoping someone with a better understanding of how logging is structured would chime in before I start trying to learn my way around that entire codebase.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hi @Borda -- I think there's still another underlying issue with how metrics are reported, unless someone has changed this code in the meantime. If you look at the observed behavior and expected behavior above, do you agree there's an issue? That it makes no sense for the validation metrics for a particular epoch to be reported before the training metrics for that epoch, and with a different step at that?
This behaviour seems to have changed. With

```python
trainer = Trainer(
    max_epochs=2,
    gpus=1,
    logger=PrintLogger(),
    limit_train_batches=10,
    limit_val_batches=10,
    row_log_interval=1,
    progress_bar_refresh_rate=0,
)
```

the output is:

```
0: {'train_loss': 2.503892660140991, 'epoch': 0}
1: {'train_loss': 2.096820831298828, 'epoch': 0}
2: {'train_loss': 8.215052604675293, 'epoch': 0}
3: {'train_loss': 5.370606422424316, 'epoch': 0}
4: {'train_loss': 5.988080978393555, 'epoch': 0}
5: {'train_loss': 2.3805108070373535, 'epoch': 0}
6: {'train_loss': 4.3501176834106445, 'epoch': 0}
7: {'train_loss': 9.668755531311035, 'epoch': 0}
8: {'train_loss': 6.58243465423584, 'epoch': 0}
# this is the last step of the epoch, metrics get combined and logged together
9: {'epoch': 0.0, 'val_loss': 4.287566661834717, 'train_loss': 12.217967987060547, 'val_acc': 0.515625}
10: {'train_loss': 1.7836229801177979, 'epoch': 1}
11: {'train_loss': 1.7488218545913696, 'epoch': 1}
12: {'train_loss': 2.221280097961426, 'epoch': 1}
13: {'train_loss': 3.4499270915985107, 'epoch': 1}
14: {'train_loss': 3.5983619689941406, 'epoch': 1}
15: {'train_loss': 2.813007116317749, 'epoch': 1}
16: {'train_loss': 3.2659897804260254, 'epoch': 1}
17: {'train_loss': 4.156956672668457, 'epoch': 1}
18: {'train_loss': 2.931321859359741, 'epoch': 1}
# no val logs here :( we expect a dict as in step 9
```

The original problem you describe seems to be gone, but I notice two other issues:

1. In step 9 the epoch is logged as a float (0.0) instead of an int.
2. At the end of the second epoch there is no combined train/val dict like the one in step 9, so the last training batch and the second validation run never get logged.
@awaelchli, thank you for looking into it again! I agree, this does look better. It's been a while since I dug through this code, but I think I have a hunch for at least one of these issues. Note that in the second issue you point out, we're missing the last set of train metrics, which should arrive with the validation metrics. Reading through LightningLoggerBase, its API seems to go through the function `log_metrics(metrics, step)`. The float epoch thing is probably a smaller bug. I'll try to debug both of these later today if I have time.
@guydav I checked again; step 8 was missing from my post by accident because I had to copy-paste around some warnings that were printed to the console, and it seems I missed one line. Step 8 is there and I edited my post.
Oh, I mean that we're missing step 19, which contains both the 10th training batch from the second epoch and the validation metrics for that epoch.
Yes, I agree, that's the big one :) It should definitely log a dict like step 9.
Update: it appears that I am the problem. I don't know why, but I overrode … Here's a printout:
Oh, great that you found this! Last time I tried to debug it, I was stuck because we actually have tests for these things, and I was very confused about why it would not work :)
🐛 Bug
First, the clear bug: in `TrainerLoggingMixin.log_metrics()`, the epoch is added to the `metrics` variable (line 70), which is never accessed again. That should be `scalar_metrics`, shouldn't it?

Second, a question: I implemented a very primitive logger (to stdout) and logging to it. I don't get training results when the first epoch ends until after the first-epoch validation step, and consequently don't get training metrics from the last epoch. See code and sample output below. Does this make sense?
To Reproduce
Add the following code to a LightningModule and run a trainer with the following logger:
Code sample
Minimal logging in the LightningModule:
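A minimal sketch of what such logging could look like, assuming the dict-returning `'log'` API of Lightning 0.7.x (the model internals here are illustrative, not from the original post):

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.layer(x.view(x.size(0), -1))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        # the 'log' dict is what reaches the logger, every row_log_interval steps
        return {'loss': loss, 'log': {'train_loss': loss}}

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        return {'val_loss': loss}

    def validation_epoch_end(self, outputs):
        # aggregated once per epoch; its 'log' dict also reaches the logger
        avg = torch.stack([o['val_loss'] for o in outputs]).mean()
        return {'val_loss': avg, 'log': {'val_loss': avg}}
```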
A minimal logger:
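A sketch of the kind of print logger in question, assuming the `LightningLoggerBase` interface of that era (the exact set of abstract members varies across versions):

```python
from pytorch_lightning.loggers import LightningLoggerBase


class PrintLogger(LightningLoggerBase):
    """Logger that simply prints every metrics dict it receives."""

    @property
    def experiment(self):
        # no external experiment object; this logger only prints
        return None

    @property
    def name(self):
        return 'print'

    @property
    def version(self):
        return '0'

    def log_hyperparams(self, params):
        pass

    def log_metrics(self, metrics, step=None):
        # `step` is the global step the trainer attaches to this dict
        print(f'{step}: {metrics}')
```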
Expected behavior
I would expect to see the training output for each epoch followed by the validation output for that epoch, for each of the five epochs. Instead, I see the following: four train outputs and five validation ones, with the validation output coming first:
Observed behavior:
Expected behavior:
Where `n` is the number of steps/batches per epoch:

Environment
Additional context