etcdctl watch to an etcd in proxy mode with TLS to the cluster fails #3894
Comments
@markhowells I'm in the same situation. Any news?
@bkleef No progress at all, I'm afraid. It's very frustrating for those of us who need to use TLS across Internet-connected hosts. All my apps need to be TLS-enabled, and one I want to use (registrator - https://github.com/gliderlabs/registrator) isn't...
Can someone look into this? Maybe @gyuho @heyitsanthony
@markhowells Sorry for the delay, and thanks for the detailed report. I just reproduced the same issue. For reference, here's how I reproduced it:
Command: ./etcdctl --debug --endpoint http://localhost:2379 --no-sync set a b
./etcdctl --debug --endpoint http://localhost:2379 --no-sync get a
./etcdctl --debug --endpoint http://localhost:2379 watch a # no crash
./etcdctl --debug --endpoint http://localhost:2379 --no-sync watch a
# returns
Error: client: etcd cluster is unavailable or misconfigured
error #0: client: endpoint http://localhost:2379 exceeded header timeout
# error message in server
2016-01-20 20:29:05.152250 I | proxy: client 127.0.0.1:41196 closed request prematurely
The current V2 watch waits by encoding the URL with wait=true. When a client sets 'no-sync', it sends requests directly to the proxy, and the proxy forwards them by cloning the request object. The context timeout then cancels the original request, so the cloned request gets closed prematurely. This fixes etcd-io#3894 by inspecting the original client request and not applying the context timeout when 'wait=true'.
@gyuho So this is not related to TLS?
This is not related to TLS. I see the same behavior without TLS.
This Procfile shows the same behavior.
@markhowells Please try again after updating your client library. The fix #4254 has just been merged into the master branch. I manually tested the failing case and confirmed that it is fixed. Please let me know if you still have issues. Thanks,
Is this a bug?
I have an etcd proxy running in a docker container like so
and the ports 2379 and 2380 are mapped to the docker0 interface on the host.
The proxy comes up in the container
The main cluster itself is secured using TLS. Now, when I execute a get on a key via the proxy from a container,
data is returned as expected. The cluster TLS is terminated at the proxy, and the proxy fetches the data over the TLS connection. etcdctl requires the --no-sync flag to suppress direct connections to the cluster.
However, if I execute a watch
then etcdctl returns
and my etcd proxy
Can I set up my proxy to terminate TLS and offer HTTP access within the host or is this a bug?