panic: runtime error: integer divide by zero #1670

Closed
JavaPerformance opened this issue Aug 22, 2022 · 6 comments
Labels: good first issue (Good for newcomers), keepalive (Label to exempt Issues / PRs from stale workflow)

Comments

@JavaPerformance

Describe the bug
panic: runtime error: integer divide by zero

goroutine 392 [running]:
github.com/grafana/tempo/tempodb.(*timeWindowBlockSelector).windowForTime(...)
	/root/tempo/tempodb/compaction_block_selector.go:182
github.com/grafana/tempo/tempodb.newTimeWindowBlockSelector({0xc00000e938, 0x1, 0x1}, 0x0, 0x0, 0x0, 0x2, 0x4)
	/root/tempo/tempodb/compaction_block_selector.go:58 +0x8b2
github.com/grafana/tempo/tempodb.(*readerWriter).doCompaction(0xc0001f4600)
	/root/tempo/tempodb/compactor.go:92 +0x18c
github.com/grafana/tempo/tempodb.(*readerWriter).compactionLoop(0xc0001f4600)
	/root/tempo/tempodb/compactor.go:74 +0x96
created by github.com/grafana/tempo/tempodb.(*readerWriter).EnableCompaction
	/root/tempo/tempodb/tempodb.go:387 +0x222
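The panic originates in windowForTime, which buckets block timestamps into fixed-size windows by integer division on the configured window size. A minimal Go sketch of this failure mode (the function body and names here are illustrative, not Tempo's actual code):

```go
package main

import "fmt"

// windowForTime is a simplified, hypothetical sketch of the kind of
// bucketing done in compaction_block_selector.go: timestamps are grouped
// into windows by integer division. When window is 0 (an unconfigured
// duration), the division panics with "integer divide by zero".
func windowForTime(ts int64, window int64) int64 {
	return ts / window
}

func main() {
	defer func() {
		if r := recover(); r != nil {
			// prints the same runtime error seen in the issue's stack trace
			fmt.Println("recovered:", r)
		}
	}()
	fmt.Println(windowForTime(1661126400, 3600)) // a valid 1h window
	fmt.Println(windowForTime(1661126400, 0))    // triggers the panic
}
```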

To Reproduce
No idea.

Expected behavior
Well, not a divide by zero I guess.

Environment:

  • Infrastructure: Docker on z/OS (s390x)
  • Deployment tool: manual

Additional Context
v1.5.0

@JavaPerformance
Author

Speculatively, I changed compaction_cycle to 5m and the problem seems to have gone away.

@mdisibio
Contributor

Hi, thanks for reporting this. Inspecting the source, it looks like a configuration issue with compaction_window. I am able to reproduce the divide-by-zero error when setting compaction_window: 0.

Zero is not valid; it must be a positive time interval, and typical values are between 2m and 1h. Can you confirm your compactor settings and see if compaction_window is set? I wouldn't expect compaction_cycle: 5m to fix the issue, but it would take 5 minutes longer before the error appears. (Note: 5m is very large for compaction_cycle and will likely cause issues; I would leave it at the default of 30s or shorter.)
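Based on the defaults and typical values mentioned in this comment, a minimal compactor config that avoids the zero window might look like the following (values are illustrative, not prescriptive):

```yaml
compactor:
  compaction:
    compaction_window: 1h   # must be a positive duration; 0 triggers the panic
    compaction_cycle: 30s   # the default mentioned above; larger values only delay the error
```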

We can check for compaction_window: 0 on startup and log a warning at least.

@JavaPerformance
Author

JavaPerformance commented Aug 22, 2022

Hi, well this is interesting. If I remove compaction_window, it properly defaults to 1h0m0s. However, if I remove both compaction_window and max_block_bytes (and therefore let both default), then compaction_window defaults to 0s ... it looks like this is why I saw the problem in the first place. In fact, everything defaults to 0 on the status page:

    compaction:
        chunk_size_bytes: 0
        flush_size_bytes: 0
        compaction_window: 0s
        max_compaction_objects: 0
        max_block_bytes: 0
        block_retention: 0s
        compacted_block_retention: 0s
        retention_concurrency: 0
        iterator_buffer_size: 0
        max_time_per_tenant: 0s
        compaction_cycle: 0s

To cause this, all I had in my config file for the compactor was this (probably invalid) YAML:

compactor:
  compaction:
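The present-but-empty section behavior described above is a common config-loader pitfall: defaults are applied only when the section is entirely absent, so a key that is present but empty yields a zero-valued struct. A minimal sketch of that pattern, using encoding/json to stay stdlib-only (the real Tempo loader uses YAML and different types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Compaction is a hypothetical stand-in for the compactor's compaction block.
type Compaction struct {
	CompactionWindowSeconds int `json:"compaction_window_seconds"`
}

type Config struct {
	Compaction *Compaction `json:"compaction"`
}

func defaultCompaction() *Compaction {
	return &Compaction{CompactionWindowSeconds: 3600}
}

// load sketches the pitfall: defaults kick in only when the section pointer
// is nil (section absent); a present-but-empty section keeps its zero values.
func load(raw string) (*Compaction, error) {
	var cfg Config
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		return nil, err
	}
	if cfg.Compaction == nil {
		return defaultCompaction(), nil // section missing: defaults applied
	}
	return cfg.Compaction, nil // section present but empty: zero values survive
}

func main() {
	a, _ := load(`{}`)                 // no compactor section at all
	b, _ := load(`{"compaction": {}}`) // empty section, like the YAML in this issue
	fmt.Println(a.CompactionWindowSeconds, b.CompactionWindowSeconds) // → 3600 0
}
```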

@github-actions
Contributor

This issue has been automatically marked as stale because it has not had any activity in the past 60 days.
The next time this stale check runs, the stale label will be removed if there is new activity. The issue will be closed after 15 days if there is no new activity.
Please apply the keepalive label to exempt this issue.

github-actions bot added the stale label Nov 11, 2022
@joe-elliott added the good first issue and keepalive labels and removed the stale label Nov 13, 2022
@mghildiy
Contributor

I guess this was fixed by the fix for #2167.

@joe-elliott
Member

Yup, thanks for pointing that out @mghildiy 👍
