"Unprivileged mode" can be interpreted as a continuum. A simpler case is running and configuring ClusterLink the context of a single namespace deployment ("cluster" admin and application admin are the same principle). A more complex solution is needed if CRDs can't be installed at all. This could mean that CL runs in two management modes (with and without CRDs) that have considerable code changes and duplication between them, increasing maintenance effort.
A possible workaround is to run a minimal k8s distribution as the management plane, solely for serving CRDs. For example, k3s in agentless mode (embedded, with no CNI, kubelet, etc.). The pod(s) can run as a deployment, either in HA mode or as a single node. We don't expect this to be a common use case, so minimizing the code investment is beneficial. An alternative would be to run the API server (k3s) as another container in the ControlPlane pods.
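As a rough sketch of that alternative (in Go, using client-go types): the ControlPlane Pod could embed the k3s API server as a sidecar. The image, container names, and the `--disable-agent` flag are illustrative assumptions, not verified settings:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// controlPlanePod sketches a ControlPlane Pod with a minimal k3s sidecar that
// serves the CL CRDs, so nothing needs to be installed in the real cluster.
func controlPlanePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cl-controlplane"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					// The existing CL controlplane container.
					Name:  "controlplane",
					Image: "clusterlink/controlplane",
				},
				{
					// Hypothetical sidecar serving only the CRD API; the flag
					// stands in for whatever disables the agent (kubelet, CNI).
					Name:    "crd-apiserver",
					Image:   "rancher/k3s",
					Command: []string{"k3s", "server", "--disable-agent"},
					Ports:   []corev1.ContainerPort{{ContainerPort: 6443}}, // k3s API default
				},
			},
		},
	}
}
```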
The additional API endpoint runs in, and is reachable from, the local namespace only, and is used by the CL controllers. The customer would need to set up a network path to this internal API endpoint (e.g., kubectl proxy or an Ingress) so they can create CRDs directly. This can be hidden behind a CLI that does the setup and knows where to dispatch commands.
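A hedged sketch of that dispatch logic, assuming a local tunnel (e.g., `kubectl port-forward`) to the in-namespace endpoint on 6443 (k3s's default API port); the kubeconfig paths are hypothetical:

```go
package main

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// Hypothetical kubeconfig locations; a real CLI would discover these.
var (
	crdKubeconfigPath     = "/etc/clusterlink/crd-kubeconfig"
	clusterKubeconfigPath = clientcmd.RecommendedHomeFile // ~/.kube/config
)

// configFor routes CRD commands to the in-namespace endpoint (reached through
// a local tunnel such as "kubectl port-forward") and all other commands to the
// regular cluster API.
func configFor(crdScoped bool) (*rest.Config, error) {
	if crdScoped {
		return clientcmd.BuildConfigFromFlags("https://127.0.0.1:6443", crdKubeconfigPath)
	}
	return clientcmd.BuildConfigFromFlags("", clusterKubeconfigPath)
}
```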
Each controller would then have two k8s clients: one for CRDs (in the local namespace) and one for objects in the real cluster (e.g., Services, Pods, Secrets, ...). In regular mode (with cluster-installed CRDs), both clients point to the cluster API endpoint. With namespace-installed CRDs, each client runs with a different kubeconfig; a sketch follows.
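A minimal sketch of the two-client setup, assuming controller-runtime and a hypothetical `crdKubeconfig` knob that is empty in regular mode:

```go
package main

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// clients holds the two views each controller works with.
type clients struct {
	crd     client.Client // CL CRDs, possibly served by the in-namespace API
	cluster client.Client // real cluster objects: Services, Pods, Secrets, ...
}

// newClients builds both clients. crdKubeconfig is empty in regular mode
// (CRDs installed in the cluster, so both clients share the in-cluster config)
// and points at the embedded API server's kubeconfig otherwise.
func newClients(crdKubeconfig string) (*clients, error) {
	clusterCfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	crdCfg := clusterCfg
	if crdKubeconfig != "" {
		if crdCfg, err = clientcmd.BuildConfigFromFlags("", crdKubeconfig); err != nil {
			return nil, err
		}
	}
	// A real implementation would register the CL types in Options.Scheme.
	crdClient, err := client.New(crdCfg, client.Options{})
	if err != nil {
		return nil, err
	}
	clusterClient, err := client.New(clusterCfg, client.Options{})
	if err != nil {
		return nil, err
	}
	return &clients{crd: crdClient, cluster: clusterClient}, nil
}
```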
"Unprivileged mode" can be interpreted as a continuum. A simpler case is running and configuring ClusterLink the context of a single namespace deployment ("cluster" admin and application admin are the same principle). A more complex solution is needed if CRDs can't be installed at all. This could mean that CL runs in two management modes (with and without CRDs) that have considerable code changes and duplication between them, increasing maintenance effort.
A possible workaround is to run a minimal k8s distribution as the management place solely for CRD's. For example, k3s in agentless mode (embedded no CNI, Kubelet, etc.). The pod(s) can run as a deployment in HA mode or single node mode. We don't expect this to be a common use case so minimizing code investment is beneficial. An alternative would run the API (k3s) as another container in the ControlPlane Pods.
The additional API endpoint runs and is available in the local namespace only and used by the CL controllers. The customer would need to set up a network path to the internal API endpoint (e.g., kubectl proxy or Ingress) so they can set up CRDs directly). This can be hidden behind a CLI that does the set up and knows to dispatch commands.
Each controller would then have two k8s clients: one for CRDs (in the local namespace) and one for objects for the real cluster (e.g., Services, Pods, Secrets, ...)
For regular mode (with cluster installed CRDs) we would have both clients point to the cluster API endpoint. In namespace installed CRDs each client runs with a different kubeconfig.
CC: @praveingk