Currently, the Compactor treats certain blocks specially by not applying the consistency delay to them (compacted blocks and blocks produced by repair). This decision is based on the block source. That will not work well for eventually consistent object storages, so we need to solve this as well. This was also mentioned as a side task for https://thanos.io/proposals/201901-read-write-operations-bucket.md/
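For illustration, here is a minimal Go sketch of what such a source-based exemption looks like conceptually. The types and names are illustrative only, not the actual Thanos code; the point is that blocks marked as compactor/repair output skip the delay entirely, which is exactly what becomes unsafe on an eventually consistent object store:

```go
// Hypothetical sketch (not the real Thanos implementation): blocks whose
// metadata marks them as compactor or repair output bypass the consistency
// delay, while sidecar-uploaded blocks must be older than the delay.
package main

import (
	"fmt"
	"time"
)

type SourceType string

const (
	SidecarSource      SourceType = "sidecar"
	CompactorSource    SourceType = "compactor"
	BucketRepairSource SourceType = "bucket.repair"
)

type BlockMeta struct {
	ULID       string
	UploadedAt time.Time
	Source     SourceType
}

// isReady reports whether a block should be considered by the compactor.
// On an eventually consistent object store this source-based shortcut is
// risky: a partially visible compacted block is accepted immediately.
func isReady(m BlockMeta, consistencyDelay time.Duration, now time.Time) bool {
	if m.Source == CompactorSource || m.Source == BucketRepairSource {
		return true // no consistency delay applied; the behaviour questioned in this issue
	}
	return now.Sub(m.UploadedAt) >= consistencyDelay
}

func main() {
	now := time.Now()
	blocks := []BlockMeta{
		{ULID: "01A", UploadedAt: now.Add(-5 * time.Minute), Source: SidecarSource},
		{ULID: "01B", UploadedAt: now.Add(-5 * time.Minute), Source: CompactorSource},
	}
	for _, b := range blocks {
		fmt.Printf("%s ready=%v\n", b.ULID, isReady(b, 30*time.Minute, now))
	}
}
```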
Potentially we can solve it by saving successful compactor block uploads to a persistent file, the same way the shipper does, to avoid unnecessary duplicated compactions/downsamplings. We would also detect when not to compact because pieces are missing, i.e. when blocks are still in an inconsistent state. We need to be prepared for the persistent storage being lost, in which case we have to reconcile whether we performed a double compaction, etc. A rough sketch of such a persistent record is shown below.
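A minimal sketch of that idea, assuming the compactor keeps a small local JSON file of block ULIDs it has successfully uploaded (similar in spirit to the shipper's meta file). The file name, package, and functions here are hypothetical, not an existing Thanos API:

```go
// Hypothetical sketch of the proposed fix: record every successfully uploaded
// block in a local JSON file and consult it before compacting/downsampling
// again. If the file is lost, the compactor has to fall back to reconciliation.
package compactormeta

import (
	"encoding/json"
	"os"
)

const compactorMetaFile = "thanos.compactor.json" // illustrative file name

type uploadedMeta struct {
	Version  int      `json:"version"`
	Uploaded []string `json:"uploaded"` // ULIDs of blocks this compactor wrote
}

// readUploaded loads the record of uploaded blocks from dir.
// A missing file is not an error: it means we have no local history.
func readUploaded(dir string) (uploadedMeta, error) {
	var m uploadedMeta
	b, err := os.ReadFile(dir + "/" + compactorMetaFile)
	if os.IsNotExist(err) {
		return uploadedMeta{Version: 1}, nil
	}
	if err != nil {
		return m, err
	}
	return m, json.Unmarshal(b, &m)
}

// markUploaded appends a ULID to the record after a successful upload.
func markUploaded(dir, ulid string) error {
	m, err := readUploaded(dir)
	if err != nil {
		return err
	}
	m.Uploaded = append(m.Uploaded, ulid)
	b, err := json.MarshalIndent(m, "", "  ")
	if err != nil {
		return err
	}
	// Write to a temp file and rename so a crash does not leave a truncated record.
	tmp := dir + "/" + compactorMetaFile + ".tmp"
	if err := os.WriteFile(tmp, b, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, dir+"/"+compactorMetaFile)
}
```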
This issue/PR has been automatically marked as stale because it has not had recent activity. Please comment on its status; otherwise the issue will be closed in a week. Thank you for your contributions.
Hello 👋 Looks like there was no activity on this issue for the last 30 days. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this PR or push a commit. Thanks! 🤗
If there is no activity in the next week, this issue will be closed (we can always reopen an issue if we need to!). Alternatively, use the `remind` command if you wish to be reminded at some point in the future.
cc @khyatisoneji