Hi! Here is a first implementation (for testing purposes) of a Docker Swarm infrastructure with persistent volumes on OVH Public Cloud, motivated by the end of life of the Mesos/Marathon stack.
For this test we will use:
- 1 private network (Docker-Swarm-Prod)
  - This network is for the Swarm communication
  - All instances are linked to this network
  - Subnet: 192.168.0.0/24
- 2 instances on CentOS 7
  - 1 manager
    - Network:
      - eth0: public IP address
      - eth1: 192.168.0.10
  - 1 worker
    - Network:
      - eth0: public IP address
      - eth1: 192.168.0.20
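These resources can be created from the OVH control panel. As an alternative, here is a minimal sketch using the OpenStack CLI (assuming your OpenRC file is sourced; the image, flavor and keypair names are placeholders to adapt, and depending on your project the private network may still need to be created from the control panel):
openstack network create Docker-Swarm-Prod
openstack subnet create --network Docker-Swarm-Prod --subnet-range 192.168.0.0/24 Docker-Swarm-Prod-subnet
openstack server create --image "Centos 7" --flavor <flavor> --key-name <keypair> --nic net-id=Ext-Net --nic net-id=Docker-Swarm-Prod,v4-fixed-ip=192.168.0.10 swarm-manager
openstack server create --image "Centos 7" --flavor <flavor> --key-name <keypair> --nic net-id=Ext-Net --nic net-id=Docker-Swarm-Prod,v4-fixed-ip=192.168.0.20 swarm-worker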
On each instance, update the system:
yum -y update
Verify and configure (if needed) the network
- /etc/sysconfig/network-scripts/ifcfg-eth0
- /etc/sysconfig/network-scripts/ifcfg-eth1
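For reference, a minimal static configuration for eth1 on the manager could look like this sketch (adapt IPADDR to 192.168.0.20 on the worker; the ZONE line is appended by the commands just below):
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.10
NETMASK=255.255.255.0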
echo "ZONE=public" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "ZONE=internal" >> /etc/sysconfig/network-scripts/ifcfg-eth1
systemctl restart network
firewall-cmd --get-active-zones
firewall-cmd --zone=public --list-all
firewall-cmd --zone=internal --permanent --remove-service=mdns
firewall-cmd --zone=internal --permanent --remove-service=samba-client
firewall-cmd --zone=internal --permanent --remove-service=dhcpv6-client
firewall-cmd --reload
firewall-cmd --zone=internal --list-all
(Optional) You can remove SSH access from the public zone if you can reach the instances through the private network
firewall-cmd --zone=public --permanent --remove-service=ssh
firewall-cmd --reload
On the public zone, we allow HTTP/HTTPS by default, but you can add your own service ports here as well
firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --zone=public --permanent --add-service=https
firewall-cmd --reload
firewall-cmd --zone=public --list-all
On the internal zone, we allow the ports needed by Swarm: 2377/tcp for cluster management, 7946 TCP/UDP for node communication, 4789/udp for the overlay network, plus 2376/tcp for the Docker daemon
firewall-cmd --zone=internal --permanent --add-port=2376/tcp
firewall-cmd --zone=internal --permanent --add-port=2377/tcp
firewall-cmd --zone=internal --permanent --add-port=7946/tcp
firewall-cmd --zone=internal --permanent --add-port=7946/udp
firewall-cmd --zone=internal --permanent --add-port=4789/udp
firewall-cmd --reload
firewall-cmd --zone=internal --list-all
Install Docker CE:
yum install -y yum-utils
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable docker
systemctl start docker
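(Optional) Quick check that the Docker daemon is up before creating the Swarm:
systemctl status docker
docker version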
On the manager, initialize the Swarm and advertise the private IP address:
docker swarm init --advertise-addr 192.168.0.10
(As a reminder) You can use these commands on the manager to retrieve the tokens for adding another manager or worker node
docker swarm join-token manager
docker swarm join-token worker
On the worker, join the Swarm with the worker token:
docker swarm join --token <token> 192.168.0.10:2377
See the status of your pool
docker node ls
Now we install the REX-Ray Cinder plugin so that Docker volumes can be backed by OVH (OpenStack Cinder) block storage. In the command below, you need to change some information:
- Openstack Username
- Openstack Password
- Openstack Tenant
- Openstack Tenant Name
- Openstack Region
You can find this information inside the OpenRC file. You can follow this guide (in French) to download the OpenRC file: https://docs.ovh.com/fr/public-cloud/charger-les-variables-denvironnement-openstack/
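For example, once the OpenRC file is sourced (the file name below is a placeholder), the values can be read from the usual OS_* environment variables of a Keystone v2 OpenRC, which matches the v2.0 auth URL used below:
source openrc.sh
echo $OS_USERNAME
echo $OS_TENANT_ID
echo $OS_TENANT_NAME
echo $OS_REGION_NAME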
docker plugin install --grant-all-permissions rexray/cinder:edge \
CINDER_AUTHURL=https://auth.cloud.ovh.net/v2.0/ \
CINDER_USERNAME=<Openstack Username> \
CINDER_PASSWORD=<Openstack Password> \
CINDER_TENANTID=<Openstack Tenant> \
CINDER_TENANTNAME=<Openstack Tenant Name> \
CINDER_REGIONNAME=<Openstack Region, e.g. GRA3> \
CINDER_AVAILABILITYZONENAME=nova \
REXRAY_FSTYPE=ext4 \
REXRAY_PREEMPT=true
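Note that Docker plugins are installed per node, so run the same install on the worker as well, otherwise the volume cannot be mounted there. You can check that the plugin is installed and enabled with:
docker plugin ls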
Create a 5 GB volume with the rexray/cinder driver:
docker volume create --name docker-swarm-prod-pg_data --driver rexray/cinder:edge --opt size=5
You can see the volume on each swarm node
docker volume ls
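If you want to double-check the driver and options recorded for the volume:
docker volume inspect docker-swarm-prod-pg_data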
Now we create a PostgreSQL service that mounts this volume:
docker service create --replicas 1 --name pg -e POSTGRES_PASSWORD=mysecretpassword --mount type=volume,source=docker-swarm-prod-pg_data,target=/var/lib/postgresql/data,volume-driver=rexray/cinder:edge --constraint 'node.role==worker' postgres
This container will start on the worker node (due to the constraint)
On the worker, you can see the container with this command
docker ps
You can also see it in the OVH interface (the 5 GB disk attached to the worker)
On the worker node, connect to the container and create a new database
docker exec -ti <name> sh
su - postgres
psql
CREATE DATABASE testvolumepersistance;
\l
\q
exit
exit
On the manager, we remove the placement constraint from the service so that it can be rescheduled on the manager when the worker goes down (we only have 1 worker)
docker service update --constraint-rm node.role==worker pg
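If you want to confirm that the constraint is gone, inspect the service:
docker service inspect --pretty pg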
On the worker, we stop Docker to simulate a node failure
systemctl stop docker
On the manager we can see the status of the pool
docker node ls
We can also see the status of the service
docker service ps pg
Our service has been restarted on the manager node.
In the OVH interface, we can see that our volume is now attached to the manager
What about the data? Connect to the container on the manager node
docker ps
docker exec -ti <name> sh
su - postgres
psql
\l
\q
exit
exit
As we can see, our test database is still here!