What about the average traffic usage difference? It's hard to tell what's happening just from the usage graphs. I'm 99% sure that compression is actually being applied, because with --grpc-compression=snappy my project https://github.com/GiedriusS/thanos-rust stops working (hyperium/tonic#282):
```
Error executing query: proxy Series(): rpc error: code = Aborted desc = receive series from Addr: 127.0.0.1:50051 LabelSets: {dc="hx", prometheus_node_id="5"} Mint: -9223372036854775808 Maxt: 9223372036854775807: rpc error: code = Unimplemented desc = Message compressed, compression support not enabled.
```
Perhaps your traffic consists of a lot of unique labels, or lots of different, small queries, hence there's no obvious effect.
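For context on that error: in gRPC, message compression is negotiated by codec name, and a peer that receives a compressed message without a matching codec registered rejects it with exactly this `Unimplemented` error. Here is a minimal sketch of what a snappy codec looks like in gRPC-Go (an illustration of the `encoding` API, not Thanos's actual implementation; the package name and the github.com/golang/snappy dependency are assumptions):

```go
package snappygrpc // hypothetical package name

import (
	"io"

	"github.com/golang/snappy"
	"google.golang.org/grpc/encoding"
)

// Registering the codec makes it available to both clients and servers
// in this process under the name "snappy".
func init() {
	encoding.RegisterCompressor(compressor{})
}

type compressor struct{}

func (compressor) Name() string { return "snappy" }

// Compress wraps the outgoing message stream in a snappy writer.
func (compressor) Compress(w io.Writer) (io.WriteCloser, error) {
	return snappy.NewBufferedWriter(w), nil
}

// Decompress wraps the incoming message stream in a snappy reader.
func (compressor) Decompress(r io.Reader) (io.Reader, error) {
	return snappy.NewReader(r), nil
}
```

A client then opts in per connection with e.g. `grpc.WithDefaultCallOptions(grpc.UseCompressor("snappy"))`; the server side just needs the codec registered, which is why a peer without it (like tonic at the time) fails hard rather than silently skipping compression.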
**Version:**
Thanos: v0.29.0-rc.0
Image: thanosio/thanos:v0.29.0-rc.0
**What happened:**
I ran a gRPC compression test with the latest version of the image, trying to reduce transmission traffic.
Data flow diagram:
store(grpc) -> query1(grpc) -> query2(http) -> queryFrontend
(Note: the query2 -> queryFrontend hop is plain HTTP via --query-frontend.downstream-url, so --grpc-compression can only affect the first two hops.)
query1 traffic graph (screenshot): yellow = --grpc-compression=snappy, green = --grpc-compression=none
query2 traffic graph (screenshot): yellow = --grpc-compression=snappy, green = --grpc-compression=none
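To compare the traffic numerically rather than from graphs (per the comment above), one option is a small gRPC stats handler that sums on-the-wire payload bytes while the same queries are replayed with and without snappy. A minimal sketch, assuming gRPC-Go; the `byteCounter` helper and the 127.0.0.1:10901 endpoint are hypothetical:

```go
package main

import (
	"context"
	"log"
	"sync/atomic"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/stats"
)

// byteCounter sums compressed (on-the-wire) payload sizes for every RPC
// on the connection, so two runs can be compared byte-for-byte.
type byteCounter struct{ in, out atomic.Int64 }

func (b *byteCounter) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context { return ctx }
func (b *byteCounter) HandleRPC(_ context.Context, s stats.RPCStats) {
	switch p := s.(type) {
	case *stats.InPayload:
		b.in.Add(int64(p.WireLength)) // size after compression
	case *stats.OutPayload:
		b.out.Add(int64(p.WireLength))
	}
}
func (b *byteCounter) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context { return ctx }
func (b *byteCounter) HandleConn(context.Context, stats.ConnStats)                       {}

func main() {
	c := &byteCounter{}
	conn, err := grpc.Dial("127.0.0.1:10901", // hypothetical store endpoint
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithStatsHandler(c),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// ... issue identical Series() calls here, once with and once without
	// grpc.UseCompressor("snappy"), then compare the totals.
	log.Printf("received %d B, sent %d B on the wire", c.in.Load(), c.out.Load())
}
```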
**What you expected to happen:**
The bandwidth on the query1 & query2 hops should be reduced after turning on gRPC compression.
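One thing worth checking before expecting a big drop: snappy trades compression ratio for speed, and gRPC compresses per message, so many small responses mostly carry per-message framing that snappy cannot remove. A quick, self-contained way to get a feel for the ratios (the payloads below are made up, and the gRPC codec actually uses snappy's stream format rather than block `Encode`, so real numbers will differ):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/golang/snappy"
)

func main() {
	// Tiny message: little redundancy, overhead dominates.
	small := []byte(`{"dc":"hx","prometheus_node_id":"5"}`)
	// Large repetitive message: the kind of payload snappy shrinks well.
	large := []byte(strings.Repeat(`prometheus_node_id="5",dc="hx";`, 1000))

	for _, b := range [][]byte{small, large} {
		c := snappy.Encode(nil, b)
		fmt.Printf("raw=%d compressed=%d ratio=%.2f\n",
			len(b), len(c), float64(len(c))/float64(len(b)))
	}
}
```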
**Details in args:**
store:
```
receive
  --log.level=info
  --log.format=logfmt
  --grpc-address=0.0.0.0:10901
  --http-address=0.0.0.0:10902
  --remote-write.address=0.0.0.0:19291
  --objstore.config=$(OBJSTORE_CONFIG)
  --tsdb.path=/var/thanos/receive
  --label=thanosreplica="$(NAME)"
  --label=receive="true"
  --tsdb.retention=1d
  --receive.local-endpoint="$(ENDPOINT)"
```
query1:
```
query
  --log.level=info
  --log.format=logfmt
  --grpc-address=0.0.0.0:10901
  --http-address=0.0.0.0:10902
  --endpoint="$(ENDPOINT)"
  --query.auto-downsampling
  --query.default-step=30s
  --query.metadata.default-time-range=5m
  --query.max-concurrent=100
  --query.max-concurrent-select=20
  --grpc-compression=snappy
```
query2:
```
query
  --log.level=info
  --log.format=logfmt
  --grpc-address=0.0.0.0:10901
  --http-address=0.0.0.0:10902
  --endpoint="$(QUERY1)"
  --query.auto-downsampling
  --query.default-step=30s
  --query.metadata.default-time-range=5m
  --query.max-concurrent=100
  --query.max-concurrent-select=20
  --grpc-compression=snappy
```
frontend:
```
query-frontend
  --log.level=info
  --log.format=logfmt
  --web.disable-cors
  --http-address=0.0.0.0:10902
  --query-frontend.compress-responses
  --query-frontend.downstream-url="$(QUERY2)"
  --query-frontend.downstream-tripper-config=
    "max_idle_conns_per_host": 500
    "idle_conn_timeout": "5m"
  --query-range.split-interval=3h
  --query-range.align-range-with-step
  --query-range.max-query-parallelism=56
  --query-range.max-retries-per-request=5
  --query-range.response-cache-max-freshness=15s
  --query-range.response-cache-config-file=/conf/cache/config.yml
  --labels.default-time-range=30m
  --labels.split-interval=3h
  --labels.partial-response
  --labels.response-cache-config-file=/conf/cache/config.yml
  --labels.max-retries-per-request=5
  --labels.response-cache-max-freshness=15s
```