Thanos, Prometheus and Golang version used:
Thanos: 0.9.0
Go: 1.13.1
Prometheus: 2.14
Object Storage Provider: GCS
What happened:
Our `index.cache.json` files for a couple of days are being created larger than the index files themselves. Is this expected?
Comparing with 5 days ago, we had index files of 1G and 300-400MB `cache.json` files. Now we have index files of 1.8G and caches of 1.7G.
I think the big files are responsible for the current overload in `thanos-store`, because the stores only crash when we configure `min-size` and `max-size` to cover the days that have the bigger caches.
The compactor is running without errors.
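For reference, a minimal sketch (assuming the Go GCS client, an illustrative bucket name, and a placeholder block ULID) of how one might list the `index` and `index.cache.json` sizes for a single block, to confirm which days produce the oversized caches:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

const (
	bucketName = "my-thanos-bucket"            // assumption: replace with your bucket
	blockID    = "01DXXXXXXXXXXXXXXXXXXXXXXX" // assumption: ULID of the block to inspect
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// List the objects of one block and print only the index and its JSON cache.
	it := client.Bucket(bucketName).Objects(ctx, &storage.Query{Prefix: blockID + "/"})
	for {
		attrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		if strings.HasSuffix(attrs.Name, "index") || strings.HasSuffix(attrs.Name, "index.cache.json") {
			fmt.Printf("%-60s %d bytes\n", attrs.Name, attrs.Size)
		}
	}
}
```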
What you expected to happen:
The previous behavior: cache files much smaller than index files.
Anything else we need to know:
Yes, the reason behind that is that `index.cache.json` is unoptimized, e.g. strings are not interned. With a huge number of labels and long strings this can happen, but your case looks really extreme.
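To illustrate what "not interned" means for on-disk size, here is a minimal, hypothetical Go sketch (not the actual Thanos cache format) contrasting a naive JSON layout that repeats every label string per series with a symbol-table layout that stores each unique string once and references it by index:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// naiveSeries repeats its label strings in full for every series.
type naiveSeries struct {
	Labels []string `json:"labels"`
}

// symbolized writes each unique string once and refers to it by index.
type symbolized struct {
	Symbols []string `json:"symbols"` // each unique string stored once
	Series  [][]int  `json:"series"`  // indices into Symbols
}

func main() {
	// 10k series all sharing one long label string.
	long := `__name__="http_requests_total",job="my-very-long-job-name",instance="node-1.example.internal:9100"`

	naive := make([]naiveSeries, 10000)
	for i := range naive {
		naive[i] = naiveSeries{Labels: []string{long}}
	}

	sym := symbolized{Symbols: []string{long}}
	for range naive {
		sym.Series = append(sym.Series, []int{0})
	}

	a, _ := json.Marshal(naive)
	b, _ := json.Marshal(sym)
	fmt.Printf("naive (strings repeated): %d bytes\n", len(a))
	fmt.Printf("symbol table (interned):  %d bytes\n", len(b))
}
```

With many series sharing long label strings, the repeated layout grows roughly with the number of series rather than the number of unique strings, which is consistent with a JSON cache approaching the size of the index itself.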
We are working on this to resolve your issue: #1839
I see.
Thanks for the response @bwplotka, I'll be watching this issue.
With the new Prometheus 2.14 we saw some Highest Cardinality Metric Names and Highest Cardinality Labels that may be relevant here. Those metrics come from home-made exporters and we are working on them.
But something is odd, because those metrics have been there for a long time already. Do you have any tips about where to look?
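One place to start is confirming which metric names actually hold the most series. A minimal sketch, assuming a Prometheus instance reachable at http://localhost:9090 (adjust `promURL` for your setup), that asks the HTTP query API for the top metric names by series count:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/url"
)

const promURL = "http://localhost:9090" // assumption: local Prometheus

func main() {
	// Top 10 metric names by number of series currently in the head.
	q := `topk(10, count by (__name__)({__name__=~".+"}))`

	resp, err := http.Get(promURL + "/api/v1/query?query=" + url.QueryEscape(q))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var body struct {
		Data struct {
			Result []struct {
				Metric map[string]string `json:"metric"`
				Value  []interface{}     `json:"value"` // [timestamp, "count"]
			} `json:"result"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		log.Fatal(err)
	}

	for _, r := range body.Data.Result {
		fmt.Printf("%-60s %v series\n", r.Metric["__name__"], r.Value[1])
	}
}
```

Note that this query touches every series, so it can be expensive on a large instance; run it against a single Prometheus rather than through a global query layer.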
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.