support for volumes_from docker-compose construct #190
Conversation
(force-pushed from 4272fd9 to 19b5e6e)
PVCs are created with AccessMode "ReadWriteOnce", so they can't be mounted to multiple containers :-(
@kadel so I tried the following in minikube; see if you can re-create it.

Start a minikube VM and ssh into it:

$ minikube start
$ minikube ssh
docker@minikube:~$ mkdir ~/data

And create a PV and PVC using the following file:

$ cat vols.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol1
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/home/docker/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Then create pods that share the same PVC:

$ cat pods.yaml
apiVersion: v1
kind: Pod
metadata:
  name: httpd
spec:
  containers:
  - image: docker.io/centos/httpd
    name: httpd
    volumeMounts:
    - mountPath: /var/www/html
      name: httpd-volume
  volumes:
  - name: httpd-volume
    persistentVolumeClaim:
      claimName: myclaim
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mynginx
      name: nginx-volume
  volumes:
  - name: nginx-volume
    persistentVolumeClaim:
      claimName: myclaim

Once everything has started:

$ kubectl exec -it nginx bash
root@nginx:/# cat > mynginx/index.html
this file is created from nginx container and will be served by httpd

Get the IP addresses of the pods:

$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
httpd 1/1 Running 0 14m 172.17.0.4 minikube
nginx 1/1 Running 0 14m 172.17.0.3 minikube

In the minikube VM I could see httpd serving the file created from the nginx container:

$ minikube ssh
docker@minikube:~$ curl 172.17.0.4
this file is created from nginx container and will be served by httpd
Now a user can provide volumes_from to share volumes from another service; the PVC created for that service will be shared by the service calling volumes_from.
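For context, volumes_from in a compose file lets one service mount all the volumes declared by another service. A minimal sketch (the service names and images below are only illustrative, not taken from this PR):

version: "2"
services:
  db:
    image: postgres            # hypothetical service that declares a volume
    volumes:
      - /var/lib/postgresql/data
  web:
    image: nginx               # hypothetical service reusing db's volumes
    volumes_from:
      - db

Under this change, the PVC that kompose creates for the db volume would also be mounted into the pod spec generated for web.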
(force-pushed from 19b5e6e to fd09efe)
OK, I followed your instructions and it worked. But according to the docs it shouldn't have :-) 'ReadWriteOnce – the volume can be mounted as read-write by a single node'. Otherwise it doesn't make sense, or I don't get it :-(
Ohhh, I get it now (maybe 😉): it is per single node, not per single container. This will work on a single-node cluster, but not on multi-node. I'll have to test this.
I tested this on a multi-node cluster, and if the pods are on different nodes it is not working :-(
I have a 2-node, 1-master k8s cluster and one more machine acting as an NFS server outside the k8s cluster. These are the pods I am running:

$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
httpd 1/1 Running 0 35m 10.246.93.2 kubernetes-node-2
httpd1 1/1 Running 0 4m 10.246.93.4 kubernetes-node-2
httpd2 1/1 Running 0 3m 10.246.93.7 kubernetes-node-2
httpd3 1/1 Running 0 3m 10.246.93.8 kubernetes-node-2
httpd4 1/1 Running 0 2m 10.246.48.3 kubernetes-node-1
httpd5 1/1 Running 0 1m 10.246.48.7 kubernetes-node-1

And here are the PV and PVC I have:

$ kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
vol1 5Gi RWO Retain Bound default/myclaim 45m
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
myclaim Bound vol1 5Gi RWO 42m

Now on node 1:

[vagrant@kubernetes-node-1 ~]$ curl 10.246.93.2
hey this should reflect everywhere
[vagrant@kubernetes-node-1 ~]$ curl 10.246.48.3
hey this should reflect everywhere
[vagrant@kubernetes-node-1 ~]$ curl 10.246.93.4
hey this should reflect everywhere
[vagrant@kubernetes-node-1 ~]$ curl 10.246.48.7
hey this should reflect everywhere

Now on node 2:

[vagrant@kubernetes-node-2 ~]$ curl 10.246.93.2
hey this should reflect everywhere
[vagrant@kubernetes-node-2 ~]$
[vagrant@kubernetes-node-2 ~]$ curl 10.246.48.3
hey this should reflect everywhere
[vagrant@kubernetes-node-2 ~]$ curl 10.246.93.4
hey this should reflect everywhere
[vagrant@kubernetes-node-2 ~]$ curl 10.246.48.7
hey this should reflect everywhere

The only difference is the PV definition:

$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol1
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /shared/kubernetes/web
    server: 192.168.121.199

I followed http://severalnines.com/blog/wordpress-application-clustering-using-kubernetes-haproxy-and-keepalived for the setup. All the pods are also the same, just having different names.
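Side note on access modes: since NFS supports shared access, the same PV and claim could also be declared with ReadWriteMany, which explicitly allows mounting from multiple nodes. A sketch assuming the same NFS export (not part of this PR):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany          # changed from ReadWriteOnce; sketch only
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /shared/kubernetes/web
    server: 192.168.121.199
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteMany          # must match an access mode offered by the PV
  resources:
    requests:
      storage: 1Gi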
Ok, it works with NFS on multi-node. I think that we can merge this, so more people can test it and we can improve it later on.
LGTM
Maybe wait for @ngtuna if he has any feedback on this.
switch t := obj.(type) {
case *api.ReplicationController:
    svcName := t.ObjectMeta.Name
    for _, dependentSvc := range komposeObject.ServiceConfigs[svcName].VolumesFrom {
We should make this better by reducing the copied block of code.
I would appreciate help with this code repetition! I also asked about it once on Slack!
Right. Probably we can use a common function to return an api.PodTemplateSpec struct for all cases. Then you don't have to repeat the for _, dependentSvc... loop. That's not much, but it looks a little better.
...because all resources in the 4 cases contain api.PodTemplateSpec in their structs.
But here I am adding data to the existing t.Spec.Template.Spec.Volumes and t.Spec.Template.Spec.Containers[0].VolumeMounts, so returning a common blank struct won't help IMO.
OK so I will check it and probably make an enhanced PR later. Let's get it merged. Thanks @surajssd 👍
@kadel so we had this discussion about the […] Also at the end of the talk some questions regarding this are asked, at [2]. [0] Kubernetes Storage 101; KubeCon EU 2016: https://www.youtube.com/watch?v=ZqTHe6Xj0Ek