WIP: Address lack of support within Kubernetes to consume vSphere managed storage #1
Comments
If you use Flocker as a k8s plugin, you can consume vSphere volumes via https://github.com/vmware/vsphere-flocker-driver
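For context, a minimal sketch of what consuming such a volume from a pod looks like, assuming Flocker's control service and the vsphere-flocker-driver are already set up on the cluster; the dataset name and pod details here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flocker-web
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: www-root
          mountPath: /usr/share/nginx/html
  volumes:
    - name: www-root
      # The flocker volume plugin hands the dataset name to the Flocker
      # control service, which resolves it to the backing (vSphere) volume.
      flocker:
        datasetName: my-flocker-volume
```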
Yup, using the Flocker driver is one of the options available; in addition, we also plan to have native integration with vSphere.
We've completed the vSphere volume plugin; it should land in 1.3. kubernetes#23932
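With that plugin, a pod should be able to reference a pre-provisioned VMDK directly. A minimal sketch; the datastore path, filesystem type, and pod details are purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vsphere-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: vmdk-storage
          mountPath: /data
  volumes:
    - name: vmdk-storage
      # In-tree vSphere volume plugin (Kubernetes 1.3+): the VMDK must
      # already exist at this path on a datastore visible to the node VMs.
      vsphereVolume:
        volumePath: "[datastore1] volumes/test.vmdk"
        fsType: ext4
```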
BaluDontu pushed a commit that referenced this issue on Dec 8, 2016.
divyenpatel pushed a commit that referenced this issue on Jan 19, 2017:

Automatic merge from submit-queue

Add rule for detecting exceptions to fluentd config for GKE logging (#1)

**What this PR does / why we need it**: Adds the [fluent-plugin-detect-exceptions](https://rubygems.org/gems/fluent-plugin-detect-exceptions) plugin, version 0.0.4, to the fluentd config for Kubernetes clusters running on Google Cloud. This plugin detects exception stacks in the stdout/stderr log streams of the containers that run in the cluster and makes sure they are forwarded to Stackdriver Logging as a single log entry.

**Which issue this PR fixes**: fixes #

**Release note**:
```release-note
fluentd config for GKE clusters updated: detect exceptions in container log streams and forward them as one log entry.
```
BaluDontu pushed a commit that referenced this issue on May 24, 2017:

Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)

Only retrieve relevant volumes

**What this PR does / why we need it**: Improves performance of Cinder volume attach/detach calls. Currently, when Cinder volumes are attached or detached, functions try to retrieve details about the volume from the Nova API. Because some callers only have the volume name, not its UUID, they use the list function in gophercloud to iterate over all volumes to find a match. This incurs severe performance problems on OpenStack projects with lots of volumes (sometimes thousands), since a new request must be sent whenever the current page does not contain a match. A better way of doing this is to use the `?name=XXX` query parameter to refine the results.

**Which issue this PR fixes**: kubernetes#26404

**Special notes for your reviewer**: There were two ways of addressing this problem:

1. Use the `name` query parameter.
2. Instead of using the list function, switch to volume UUIDs and use the GET function instead. This would require changing the signature of a few functions, such as [`DeleteVolume`](https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/cinder.go#L49), so I'm not sure how backwards compatible that is.

Since option 1 does effectively the same as option 2, I went with it because it ensures backwards compatibility. One assumption made here is that the `volumeName` being retrieved matches the name of the volume in Cinder exactly. I'm not sure how accurate that is, but I see no reason why cloud providers would want to append/prefix things arbitrarily.

**Release note**:
```release-note
Improves performance of Cinder volume attach/detach operations
```
BaluDontu pushed a commit that referenced this issue on Jun 17, 2017:

Automatic merge from submit-queue (batch tested with PRs 47523, 47438, 47550, 47450, 47612)

Move slow PV test to slow suite. See [testgrid](https://k8s-testgrid.appspot.com/google-gce#gce&width=5&graph-metrics=test-duration-minutes). #1
This was referenced Sep 18, 2018.
ashahi1 pushed a commit that referenced this issue on Sep 26, 2018.
Objective
There is no mechanism to specify storage managed by vSphere for consumption by a Pod. The purpose of this document is to describe a viable short-term solution to address the needs of customers running ESX who want to consume storage in K8S.
The reason this is a short-term solution is twofold:
Proposal
Implement a K8S volume plugin, based on the volume plugin framework, against the last stable release in the forked repo. This will unblock customers who want to try K8S + vSphere + storage. Going forward, the plugin will be updated based on the outcome of kubernetes#18333.
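To illustrate, a sketch of how a vSphere-backed disk could be exposed through the standard PersistentVolume/PersistentVolumeClaim workflow once such a plugin exists; the names, capacity, and datastore path are assumptions made for illustration:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vsphere-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  # Same volume source the in-tree plugin would expose to pods, here
  # wrapped in a PV so that claims can bind to it.
  vsphereVolume:
    volumePath: "[datastore1] volumes/pv0001.vmdk"
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vsphere-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```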
Alternative
We ship a binary plugin and depend on the FlexVolume framework (currently experimental, available only on master, with no stable release). This frees us from depending on getting our code into the official repo.
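For comparison, a sketch of what consuming the binary plugin through FlexVolume might look like from a pod. The driver name and the options map are hypothetical, since FlexVolume simply passes the options through to whatever driver binary is installed on each node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flex-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: flex-storage
          mountPath: /data
  volumes:
    - name: flex-storage
      # FlexVolume invokes a vendor driver binary on the node; the driver
      # name and options below are hypothetical placeholders.
      flexVolume:
        driver: "vmware/vsphere"
        fsType: ext4
        options:
          volumePath: "[datastore1] volumes/test.vmdk"
```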
Cons
[1] The code is external to the K8S plugin and is shared with the Docker plugin.