This repository has been archived by the owner on Sep 7, 2022. It is now read-only.

WIP: Address lack of support within Kubernetes to consume vSphere managed storage #1

Closed
kerneltime opened this issue Jan 22, 2016 · 3 comments

Comments

@kerneltime

Objective

There is no mechanism to specify storage managed by vSphere to be consumed by a Pod. The purpose of this document is to describe a viable short-term solution to address the needs of customers running ESX who want to consume storage in K8S.

The reason this is short-term for now is twofold:

  • External: We need more clarity on the long-term direction of storage management in kubernetes. The state of storage management is evolving and might undergo fundamental changes.
  • Internal: We need more clarity on how to coexist with and best surface the capabilities of ESX/vCenter/Photon Controller.

Proposal

Implement a K8S volume plugin, based on the existing volume plugin framework, against the last stable release in the forked repo. This will unblock customers wanting to try K8S + vSphere + storage. Going forward, the plugin will be updated based on the outcome of kubernetes#18333. A sketch of the plugin shape follows the list below.

  • The plugin will be part of the k8s code.
  • The base image authored by VMware used to install k8s on top of vSphere will include an additional daemon that will service control-plane requests from the plugins. [1]
  • The ESX user space will require a VIB to be installed on all nodes to allow control operations to be executed. [1]
  • The creation and deletion of volumes needs to be done via a CLI tool installed on the master.
  • Volumes can be mounted on and detached from any of the nodes based on the Pod's description.
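
To make the proposal concrete, here is a minimal sketch of the shape such a plugin takes. The interface below is a simplified, illustrative stand-in for the in-tree `pkg/volume` plugin interface (whose exact methods and signatures vary by release), and the type names are hypothetical:

```go
package vsphere

// volumePlugin is a simplified stand-in for the interface in
// Kubernetes' pkg/volume; the real VolumePlugin interface has more
// methods and different signatures depending on the release.
type volumePlugin interface {
	// Init is called once when kubelet loads the plugin.
	Init() error
	// Name returns a unique, namespaced plugin name.
	Name() string
	// CanSupport reports whether this plugin can handle the volume spec.
	CanSupport(volumePath string) bool
}

// vspherePlugin would attach and detach VMDKs by talking to the
// daemon in the VMware base image (see [1] below).
type vspherePlugin struct{}

func (p *vspherePlugin) Init() error { return nil }

func (p *vspherePlugin) Name() string { return "kubernetes.io/vsphere-volume" }

func (p *vspherePlugin) CanSupport(volumePath string) bool {
	// A vSphere volume is identified by a datastore path such as
	// "[datastore1] kubevols/volume.vmdk".
	return len(volumePath) > 0 && volumePath[0] == '['
}
```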

Alternative

We ship a binary plugin and depend on the FlexVolume framework (experimental for now, and only in master; no stable release). This frees us from depending on getting our code into the official repo. A skeleton of such a driver is sketched below.
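
For reference, a FlexVolume driver is just an executable that kubelet invokes with a subcommand and that replies with a JSON status object on stdout. A minimal Go skeleton, hedged because the exact per-subcommand arguments changed between releases:

```go
// Minimal FlexVolume driver skeleton. FlexVolume invokes the binary
// with a subcommand (init, attach, detach, mount, unmount, ...) and
// expects a JSON status object on stdout. Treat this as a sketch.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status  string `json:"status"`            // "Success", "Failure", or "Not supported"
	Message string `json:"message,omitempty"` // human-readable detail
}

// reply prints the status object and exits with the matching code.
func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
	if s.Status == "Failure" {
		os.Exit(1)
	}
	os.Exit(0)
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no subcommand"})
	}
	switch os.Args[1] {
	case "init":
		reply(driverStatus{Status: "Success"})
	case "attach", "detach", "mount", "unmount":
		// A real driver would call out to the ESX-side VIB / control
		// daemon here instead of declining.
		reply(driverStatus{Status: "Not supported", Message: "sketch only"})
	default:
		reply(driverStatus{Status: "Not supported"})
	}
}
```

The binary would be dropped into kubelet's volume-plugin directory on each node; the real attach and mount paths would call into the ESX-side control daemon described in the proposal.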

Cons

  • Existing customers of kubernetes running stable releases (at the time of writing this issue) will not have any mechanism to consume vSphere storage other than it being statically attached to nodes.
  • Future directional changes by kubernetes could render the plugin work throwaway.

[1] This code is external to the K8S plugin and is shared with the Docker plugin.

@wallnerryan

If you use Flocker as a k8s plugin, you can consume vSphere volumes via https://github.com/vmware/vsphere-flocker-driver

@kerneltime
Author

Yup, one of the options available is to use the Flocker driver; in addition, we also plan to have native integration with vSphere.

@vipulsabhaya

We've completed the vSphere volume plugin; it should land in 1.3. kubernetes#23932
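
For anyone landing here later: once that plugin is available, consuming a pre-provisioned VMDK from a Pod looks roughly like the sketch below, built with the modern `k8s.io/api/core/v1` Go types for illustration (the 1.3-era import paths differed); the datastore path and names are placeholders.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// examplePod builds a Pod that mounts a pre-provisioned VMDK via the
// in-tree vsphereVolume source added in kubernetes#23932.
func examplePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-vmdk"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "nginx",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "vmdk-storage",
					MountPath: "/data",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "vmdk-storage",
				VolumeSource: corev1.VolumeSource{
					VsphereVolume: &corev1.VsphereVirtualDiskVolumeSource{
						VolumePath: "[datastore1] kubevols/test.vmdk",
						FSType:     "ext4",
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(examplePod().Name)
}
```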

BaluDontu pushed a commit that referenced this issue Dec 8, 2016
divyenpatel pushed a commit that referenced this issue Jan 19, 2017
Automatic merge from submit-queue

Add rule for detecting exceptions to fluentd config for GKE logging (#1)

**What this PR does / why we need it**:
Add the [fluent-plugin-detect-exceptions](https://rubygems.org/gems/fluent-plugin-detect-exceptions) plugin, version 0.0.4, to the fluentd config for Kubernetes clusters running on Google Cloud. This plugin detects exception stack traces in the stdout/stderr log streams of the containers that run in the cluster and makes sure they are forwarded to Stackdriver Logging as a single log entry.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
fluentd config for GKE clusters updated: detect exceptions in container log streams and forward them as one log entry.
```
BaluDontu pushed a commit that referenced this issue May 24, 2017
…mance

Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)

Only retrieve relevant volumes

**What this PR does / why we need it**:

Improves performance for Cinder volume attach/detach calls. 

Currently, when Cinder volumes are attached or detached, functions try to retrieve details about the volume from the Nova API. Because some callers only have the volume name, not its UUID, they use the list function in gophercloud to iterate over all volumes to find a match. This incurs severe performance problems on OpenStack projects with lots of volumes (sometimes thousands), since a new request must be sent whenever the current page does not contain a match. A better way of doing this is to use the `?name=XXX` query parameter to refine the results, as sketched below.
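A hedged sketch of the server-side filtering approach, using the current `gophercloud` import paths for illustration (the kubernetes vendor tree at the time used an older fork, but the `ListOpts.Name` filter works the same way):

```go
package cinderutil

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v2/volumes"
	"github.com/gophercloud/gophercloud/pagination"
)

// findVolumeByName asks Cinder to filter server-side via ?name=XXX
// instead of paging through every volume in the project.
func findVolumeByName(client *gophercloud.ServiceClient, name string) (*volumes.Volume, error) {
	var match *volumes.Volume
	// ListOpts.Name becomes the ?name=XXX query parameter.
	err := volumes.List(client, volumes.ListOpts{Name: name}).EachPage(
		func(page pagination.Page) (bool, error) {
			vols, err := volumes.ExtractVolumes(page)
			if err != nil {
				return false, err
			}
			if len(vols) > 0 {
				match = &vols[0]
				return false, nil // stop paging: found it
			}
			return true, nil
		})
	if err != nil {
		return nil, err
	}
	if match == nil {
		return nil, fmt.Errorf("volume %q not found", name)
	}
	return match, nil
}
```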

**Which issue this PR fixes**:

kubernetes#26404

**Special notes for your reviewer**:

There were 2 ways of addressing this problem:

1. Use the `name` query parameter
2. Instead of using the list function, switch to using volume UUIDs and use the GET function instead. You'd need to change the signature of a few functions though, such as [`DeleteVolume`](https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/cinder.go#L49), so I'm not sure how backwards compatible that is.

Since option 1 does effectively the same as option 2, I went with it because it ensures backwards compatibility.

One assumption made here is that the `volumeName` being retrieved exactly matches the name of the volume in Cinder. I'm not sure how accurate that is, but I see no reason why cloud providers would want to append/prefix things arbitrarily.

**Release note**:
```release-note
Improves performance of Cinder volume attach/detach operations
```
BaluDontu pushed a commit that referenced this issue Jun 17, 2017
Automatic merge from submit-queue (batch tested with PRs 47523, 47438, 47550, 47450, 47612)

Move slow PV test to slow suite.

See [testgrid](https://k8s-testgrid.appspot.com/google-gce#gce&width=5&graph-metrics=test-duration-minutes).

#1
ashahi1 pushed a commit that referenced this issue Feb 15, 2018
ashahi1 pushed a commit that referenced this issue Feb 27, 2018
ashahi1 pushed a commit that referenced this issue Sep 26, 2018
ashahi1 pushed a commit that referenced this issue Jan 1, 2019
update from kubernetes master
ashahi1 pushed a commit that referenced this issue Feb 11, 2019
update from base repository