Watch: inefficient event distribution #4627
Let's say we have 1000 clients watching the same key foo. If there is a change on key foo, we need to broadcast the event to all 1000 clients. The core content, the key change, is the same for all 1000 clients. Right now, we still allocate space and encode the key change 1000 times, which is inefficient. Ideally, we should do the key-change encoding only once.
@xiang90 Thanks for the explanation! Could you also point me to the code path?
Have you taken any measures to resolve this inefficient watch mechanism?
@shuqilou This can be an easy fix. We have not measured how inefficient it is; it is actually not common to have thousands of watchers watching the same key. It would be great if you can:
I talked about this issue with @awalterschulze; here is his suggestion.
Also
type cached struct {
	WatchResponse *WatchResponse
	cache         []byte // memoized encoding, filled on first Marshal
}

func (c *cached) Marshal() ([]byte, error) {
	// Encode the wrapped WatchResponse once, then reuse the bytes
	// for every subsequent Marshal call.
	if c.cache == nil {
		b, err := c.WatchResponse.Marshal()
		if err != nil {
			return nil, err
		}
		c.cache = b
	}
	return c.cache, nil
}

func (c *cached) IsProto() {} // marker so the codec treats this as a proto message
@mqliang @sinsharat @ajityagaty @shuqilou Hope this helps explain the issue further.
Moving to 3.3. With the help of the gRPC proxy, watch load is not a huge issue.
On second thought, we should just close this one. The gRPC proxy should solve this problem.
Separated from #3848.