[cinder-csi-plugin] Allow disabling particular driver services #1021
Conversation
Welcome @rfranzke!
Hi @rfranzke. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/assign @chrigl |
Build succeeded.
/ok-to-test
cinder-csi-plugin already supports this without this PR: it runs a gRPC server with all the interfaces implemented, so we don't need to add two extra parameters explicitly. /hold
@jianglingxia Can you explain more? As of today, when starting the cinder CSI driver, it unconditionally sets up both the controller and the node server, see https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/csi/cinder/driver.go#L138-L144. I don't see how it is supported already today?
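The unconditional setup being discussed can be sketched roughly as follows. This is a simplified illustration, not the actual cloud-provider-openstack code: the `server` type and `register` method are hypothetical stand-ins for the real gRPC server and the CSI Identity/Controller/Node registration calls.

```go
package main

import "fmt"

// server is a hypothetical stand-in for a gRPC server onto which
// CSI services are registered.
type server struct{ services []string }

func (s *server) register(name string) { s.services = append(s.services, name) }

// setup mirrors the pattern in the linked driver.go lines:
// every service is always registered, with no way to opt out.
func setup() *server {
	s := &server{}
	s.register("identity")
	s.register("controller")
	s.register("node")
	return s
}

func main() {
	// One server object ends up serving all three service types.
	fmt.Println(setup().services)
}
```

The point of contention: there is still only one gRPC server process, but it unconditionally answers all three classes of CSI calls.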
The code you mentioned just registers services and their implementations with the gRPC server; they are not the 'real' servers running behind it. After registering, you still have only one server running, but it can serve different types of gRPC calls. What problem do you want to solve? What problems are you facing currently? If you do see an issue caused by the current implementation, it would be great if you could submit an issue before actually submitting a PR.
This is understood. However, I want to make it optionally possible to disable either the controller or the node service from being registered with the gRPC server.
I added docs and a description of the problem in the PR. Concretely, I want to enable the use-case where the CSI controllers are NOT running inside the cluster they are serving but outside, in a different cluster. Imagine cluster …. Also, it should be made possible that cluster ….
The above-mentioned scenarios cannot be run today because the driver services cannot be cleanly separated.
Well, if you prefer, I can open an issue now, describe the above problems, and reference this PR as the solution. Shall I really do that now that we already have this discussion thread plus a proposal for a fix? FYI: We have already made similar enhancements to the AWS EBS CSI driver (kubernetes-sigs/aws-ebs-csi-driver#438) and the GCE PD CSI driver (kubernetes-sigs/gcp-compute-persistent-disk-csi-driver#449).
Hi @rfranzke, I'm still confused. I totally understand what you want to do with this PR, but I don't understand why you can't run the controller services and node services separately. If you look at https://github.com/kubernetes/cloud-provider-openstack/tree/master/manifests/cinder-csi-plugin as an example, we are actually running the controller and node plugin in different places using different yaml files. Maybe I'm wrong or didn't fully understand your use case, but can you paste some error logs along with your preferred deployment method? If possible, please create an issue with the steps and error logs, and we can move the discussion there. Thanks for your patience. @ramineni @jichenjc @chrigl Please also join this discussion if you can.
The controller service doesn't depend on the OpenStack metadata service; it never queries it, as per https://github.com/kubernetes/cloud-provider-openstack/blob/master/pkg/csi/cinder/controllerserver.go
As per the current implementation, even if both the node service and the controller service are enabled in the plugin, NodeGetInfo, NodeStageVolume, and NodePublishVolume are not possible unless we run the node-driver-registrar sidecar. So the node service has no effect if only the controller plugin manifest is deployed on the node. If the above doesn't solve your use case, could you elaborate on the issues you are facing with both services enabled, and why using different manifests doesn't solve your use case?
@lingxiankong @ramineni Thanks for your views! I tried it out again and thoroughly followed the example manifests, and indeed it works as you expected. Probably I was working with outdated manifests earlier or did something else wrong. Anyway, I guess the PR can be closed then, as the scenario is already supported. Thanks again for your help, and sorry for taking your time with this. /close
@rfranzke thanks for the followup |
What this PR does / why we need it:
This PR adds two new command line flags, `--run-controller-service` and `--run-node-service` (both defaulting to `true`), which allow selectively disabling individual driver services if they are not required. This enables the use-case of running the CSI controllers (csi-provisioner, csi-attacher, etc., plus the driver's controller service) separately, outside of the cluster they are serving, and not necessarily on an OpenStack Nova VM. The node service part of the CSI driver still runs inside the cluster for mounting/unmounting volumes, etc.
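The proposed flags could gate service registration roughly as sketched below. The flag names come from the PR description; everything else (the string-based service list, the `servicesToRegister` helper) is a hypothetical simplification of what the real driver would do with gRPC registrations, not the actual implementation.

```go
package main

import (
	"flag"
	"fmt"
)

// servicesToRegister decides which CSI services to expose based on the
// two proposed flags. The identity service is always registered so that
// sidecars and kubelet can probe the driver.
func servicesToRegister(runController, runNode bool) []string {
	services := []string{"identity"}
	if runController {
		services = append(services, "controller")
	}
	if runNode {
		services = append(services, "node")
	}
	return services
}

func main() {
	// Flag names as proposed in the PR; both default to true, so the
	// existing all-in-one behavior is preserved unless explicitly disabled.
	runController := flag.Bool("run-controller-service", true, "enable the CSI controller service")
	runNode := flag.Bool("run-node-service", true, "enable the CSI node service")
	flag.Parse()

	fmt.Println(servicesToRegister(*runController, *runNode))
}
```

With `--run-controller-service=false`, only the identity and node services would be registered, matching the "node service only inside the cluster" deployment described above.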
Release note: