A longstanding issue with sidecar containers is that using them in batch jobs can prevent completion of the job, because the job pod will not exit until all containers have exited. One approach to handling this is to share the process namespace between containers and have the primary container terminate all other processes before quitting. There is a longstanding Kubernetes Enhancement Proposal (KEP) that would resolve this issue, Keystone containers, but it is unclear when it will be complete.
In the meantime, this feature proposes listening on a local administrative port and allowing termination, similar to istio-proxy's /quitquitquit endpoint. As the cloud-sql-proxy typically connects over the network, and the network namespace is shared between containers, this may be a viable approach to resolving this limitation.
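As a sketch of how a Job could use such an endpoint, assuming a hypothetical admin port of 9091 and the istio-style endpoint path (the image names, port, and flag are illustrative, not confirmed proxy behavior):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-with-proxy        # hypothetical name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: my-batch-image   # placeholder
        command:
        - /bin/sh
        - -c
        # Run the job, then ask the sidecar to exit over localhost
        # (the network namespace is shared between containers).
        - run-batch-job; curl -s -X POST http://127.0.0.1:9091/quitquitquit
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy   # check current image/tag
```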
Alternatives Considered
Users have a few potential workarounds:
Install cloud_sql_proxy during the container build, packaging the sidecar along with the main application. One disadvantage of this approach is that it couples the lifecycle of the primary service with the proxy; in that case, it may be preferable to use the Python connector library instead (i.e. if we're willing to change the application, then we don't need a sidecar at all). Another disadvantage is that we'd need to manage the proxy version and keep it updated.
Share the process namespace between containers, then have the primary job terminate all other processes before exiting -- I have not tried this approach, but it should work.
Use a flag file shared between processes (over an emptyDir volume) and terminate sidecar processes when the file is detected -- this requires polling for the file's existence, and seems like a pretty inelegant solution.
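The process-namespace workaround can be sketched as a manifest; `shareProcessNamespace` is a real pod field, but the names and the kill command here are illustrative only:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-with-proxy        # hypothetical name
spec:
  template:
    spec:
      restartPolicy: Never
      shareProcessNamespace: true
      containers:
      - name: main
        image: my-batch-image   # placeholder
        command:
        - /bin/sh
        - -c
        # After the job finishes, signal the other processes in the pod.
        # Signaling another container's process requires running as the
        # same user (or as root).
        - run-batch-job; kill -TERM -1
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy   # check current image/tag
```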
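The flag-file workaround can be sketched as a wrapper script around the sidecar. The flag path is hypothetical, a background `sleep` stands in for the proxy, and the "main container" touching the flag is simulated so the sketch is self-contained; in a real pod the flag would live on a shared emptyDir volume.

```shell
# Wrapper that terminates the sidecar once the main container drops a
# flag file onto a shared volume.
FLAG=/tmp/main-terminated      # would be on the shared emptyDir
rm -f "$FLAG"

sleep 300 &                    # placeholder for the cloud-sql-proxy process
PROXY_PID=$!

( sleep 1; touch "$FLAG" ) &   # simulates the main job finishing

# Poll for the flag file, then signal the sidecar -- this polling loop
# is the inelegant part noted above.
while [ ! -f "$FLAG" ]; do
  sleep 1
done
kill "$PROXY_PID"
wait "$PROXY_PID" 2>/dev/null || true
echo "sidecar terminated"
```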
Another implementation approach might be to have a timeout (e.g. if there are no active connections, then terminate after some configurable interval)
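The timeout idea could look roughly like the wrapper below. The threshold and interval are arbitrary, a background `sleep` stands in for the proxy, and the connection check is stubbed to always report idle so the sketch is self-contained; a real implementation would ask the proxy (or the kernel, e.g. via `ss`) whether any client connections are active.

```shell
# Sketch: terminate the proxy after a configurable idle interval.
sleep 300 &                    # placeholder for the cloud-sql-proxy process
PROXY_PID=$!

IDLE_LIMIT=3                   # consecutive idle checks before terminating
CHECK_INTERVAL=1               # seconds between checks
idle=0

# Stubbed out here; always reports "no active connections" so the
# timeout path is exercised.
active_connections() {
  return 1
}

while kill -0 "$PROXY_PID" 2>/dev/null; do
  if active_connections; then
    idle=0
  else
    idle=$((idle + 1))
  fi
  if [ "$idle" -ge "$IDLE_LIMIT" ]; then
    kill "$PROXY_PID"
    wait "$PROXY_PID" 2>/dev/null || true
    echo "proxy terminated after idle timeout"
    break
  fi
  sleep "$CHECK_INTERVAL"
done
```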
Thanks for the request, @jawnsy. We're actually planning on adding a /quitquitquit endpoint to v2. It's the next thing on my list and should be available in one of the next few releases.