
[self-hosted] cannot initialize workspace: cannot connect to workspace daemon #2935

Closed
jgallucci32 opened this issue Jan 15, 2021 · 3 comments

Comments

@jgallucci32
Contributor

jgallucci32 commented Jan 15, 2021

Describe the bug

Gitpod cannot initialize a workspace, failing with the following error:

cannot initialize workspace: cannot connect to workspace daemon; last backup failed: cannot connect to workspace daemon. Please contact support if you need the workspace data.

Further investigation of the ws-manager logs shows the workspace launching and then immediately shutting down:

{"@type":"type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent","addr":"172.31.4.65:8080","error":"context deadline exceeded","level":"error","message":"cannot connect to ws-daemon","serviceContext":{"service":"ws-manager","version":""},"severity":"ERROR","time":"2021-01-15T14:46:53Z"}
1/15/2021 6:46:53 AM {"instanceId":"17dc4ed7-ef4b-4932-8966-45678e3ae23d","level":"error","message":"workspace failed","serviceContext":{"service":"ws-manager","version":""},"severity":"ERROR","status":{"id":"17dc4ed7-ef4b-4932-8966-45678e3ae23d","metadata":{"owner":"91ebac6c-09e0-424c-bfab-5a6996a4bbb4","meta_id":"a4224e90-62ec-49b0-abc8-9ba7ca29960b","started_at":{"seconds":1610721983}},"spec":{"workspace_image":"registry.domain.local/gitpod-test/workspace-images:aaf4855ee28de894ea9a2d2ef7fb018c6655f2b2aa7c66bff2b4a22dbf1b13fa","ide_image":"gcr.io/gitpod-io/self-hosted/theia-ide:0.6.0","url":"https://a4224e90-62ec-49b0-abc8-9ba7ca29960b.ws.gitpod-test.domain.local","timeout":"30m"},"phase":6,"conditions":{"failed":"cannot initialize workspace: cannot connect to workspace daemon; last backup failed: cannot connect to workspace daemon. Please contact support if you need the workspace data.","final_backup_complete":1},"runtime":{"node_name":"euca-172-31-4-65.ec2.internal"},"auth":{"owner_token":"0\u003c!Jp+M3r]-[|M:D^9]|gKD:VoJgX1ZC"}},"time":"2021-01-15T14:46:53Z","userId":"91ebac6c-09e0-424c-bfab-5a6996a4bbb4","workspaceId":"a4224e90-62ec-49b0-abc8-9ba7ca29960b"}

Steps to reproduce

  1. Deploy gitpod self-hosted on vanilla K8s with 3 or more worker nodes
  2. Launch workspace (repeat to ensure it gets placed on different nodes)

NOTE: Because the failure only occurs when components talk across the host network to ws-daemon, launches will occasionally succeed depending on your K8s scheduling and available resources.

Expected behavior

Workspace launches

Additional information

Rancher Kubernetes 1.17.5
Docker CE 19.03.13
Red Hat Enterprise Linux 7.8

This is closely related to #2029

Example repository

n/a

@jgallucci32
Contributor Author

This was partially fixed in #2029 by implementing the configuration of dnsPolicy from the Helm chart.

When dnsPolicy: ClusterFirstWithHostNet is configured, the pod also needs access to the host network. The missing piece is that the following still needs to be set on ws-daemon for this to fully work:

hostNetwork: true

Since you do not want to configure hostNetwork for all pods, and only ws-daemon needs it, I recommend having the Helm template for ws-daemon set this value whenever dnsPolicy is configured to use the host network.
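A minimal sketch of what that conditional could look like in the ws-daemon DaemonSet template (the value path `.Values.components.wsDaemon.dnsPolicy` is an assumed name for illustration, not necessarily the chart's actual key):

```yaml
# Hypothetical excerpt from the ws-daemon DaemonSet template.
spec:
  template:
    spec:
      {{- if eq .Values.components.wsDaemon.dnsPolicy "ClusterFirstWithHostNet" }}
      # ClusterFirstWithHostNet only makes sense for pods on the host
      # network, so enable hostNetwork alongside it.
      hostNetwork: true
      {{- end }}
      dnsPolicy: {{ .Values.components.wsDaemon.dnsPolicy | default "ClusterFirst" }}
```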

@csweichel
Contributor

csweichel commented Jan 18, 2021

I think we've been moving a bit too fast here. ws-daemon should never have to run with hostNetwork: true just because ws-manager needs to talk to it. ws-daemon is far too privileged as it stands, and I'd prefer to keep things as close to "regular" Kubernetes networking as we can. If we "got lucky" and Calico just happens to allow pod-to-node-port traffic, that's non-standard, and since other CNIs don't support it, we must look for alternatives.

I've written up the search for alternatives here: #2956 and would like to close this issue in its favour. Please re-open if you disagree.

@jgallucci32
Contributor Author

@csweichel I agree. I would rather have the pod/nodePort communication use native/standard configurations than further escalate the pod's privileges. The default networking provider for RKE is Canal, which is what we are using in our clusters; that explains the slight differences I have seen here (and some in the past) when deploying Gitpod.

This does seem to be a topic of discussion for Canal projectcalico/canal#31, so I don't think it is so much luck as it is tuning the CNIs correctly to achieve this functionality. I'll do some research into the configuration of the Rancher cluster to see if this can be achieved with a setting change at the cluster level.
