Docker Swarm on OVH Public Cloud with persistent volumes

Hi! Here is a first implementation (for testing purposes) of a Docker Swarm infrastructure with persistent volumes on OVH Public Cloud, built to replace our Mesos/Marathon stack, which has reached end of life.

Architecture model

For this test we will use:

  • 1 private network (Docker-Swarm-Prod)
    • This network is for the Swarm communication
    • All instances are linked to this network
    • Subnet: 192.168.0.0/24
  • 2 instances on CentOS 7
    • 1 manager
      • Network:
        • eth0: public IP address
        • eth1: 192.168.0.10
    • 1 worker
      • Network:
        • eth0: public IP address
        • eth1: 192.168.0.20

(Diagram: architecture model)

Installation

On each node as root

Update the server

yum -y update

Network configuration

Verify and configure (if needed) the network

  • /etc/sysconfig/network-scripts/ifcfg-eth0
  • /etc/sysconfig/network-scripts/ifcfg-eth1
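For reference, a static configuration of the private interface on the manager could look like the sketch below. This is only an illustration: the device name, addressing method and netmask depend on your OVH image, so adapt them to your setup (use 192.168.0.20 on the worker; the ZONE entries are added in the next step).

DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.0.10
NETMASK=255.255.255.0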

Assign firewall zones to the interfaces and clean up some firewall rules

echo "ZONE=public" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "ZONE=internal" >> /etc/sysconfig/network-scripts/ifcfg-eth1

systemctl restart network

firewall-cmd --get-active-zones

firewall-cmd --zone=public --list-all

firewall-cmd --zone=internal --permanent --remove-service=mdns
firewall-cmd --zone=internal --permanent --remove-service=samba-client
firewall-cmd --zone=internal --permanent --remove-service=dhcpv6-client
firewall-cmd --reload

firewall-cmd --zone=internal --list-all

(Optional) You can remove SSH access from the public IP address if you can reach the nodes through the private network

firewall-cmd --zone=public --permanent --remove-service=ssh
firewall-cmd --reload

Configure the firewall for Docker Swarm

On the public zone we allow HTTP/HTTPS by default, but you can add your own service ports here

firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --zone=public --permanent --add-service=https

firewall-cmd --reload

firewall-cmd --zone=public --list-all

On the internal zone we allow the ports needed by Swarm: 2376/tcp (Docker daemon TLS), 2377/tcp (cluster management), 7946/tcp and 7946/udp (node-to-node communication), and 4789/udp (overlay network traffic)

firewall-cmd --zone=internal --permanent --add-port=2376/tcp
firewall-cmd --zone=internal --permanent --add-port=2377/tcp
firewall-cmd --zone=internal --permanent --add-port=7946/tcp
firewall-cmd --zone=internal --permanent --add-port=7946/udp
firewall-cmd --zone=internal --permanent --add-port=4789/udp

firewall-cmd --reload

firewall-cmd --zone=internal --list-all

Install Docker CE

On each node as root

yum install -y yum-utils

yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

yum install -y docker-ce

systemctl enable docker

systemctl start docker
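(Optional) Before going further, you can check that the Docker daemon is running on each node:

systemctl status docker

docker info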

On the Manager node as root

Initialize Swarm mode

docker swarm init --advertise-addr 192.168.0.10

(As a reminder) You can use these commands on the manager to retrieve the tokens needed to add another manager or worker node

docker swarm join-token manager
docker swarm join-token worker

On the Worker node as root

Join the Swarm pool

docker swarm join --token <token> 192.168.0.10:2377

On the manager node as root

See the status of your pool

docker node ls

(Screenshot: docker node ls showing the Swarm pool)

Install REX-Ray for persistent volumes with OpenStack Cinder

On each node as root

Here you need to change some information:

  • Openstack Username
  • Openstack Password
  • Openstack Tenant ID
  • Openstack Tenant Name
  • Openstack Region

You can find this information in your OpenRC file. This guide explains how to download it: https://docs.ovh.com/fr/public-cloud/charger-les-variables-denvironnement-openstack/
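If you are not sure which value maps to which setting, you can source the OpenRC file and print the variables it exports. The sketch below assumes a Keystone v2 style OpenRC saved as openrc.sh (newer v3 files expose OS_PROJECT_ID / OS_PROJECT_NAME instead of the OS_TENANT_* variables); the password is prompted when sourcing the file and goes into CINDER_PASSWORD.

source openrc.sh

echo "CINDER_USERNAME   = $OS_USERNAME"
echo "CINDER_TENANTID   = $OS_TENANT_ID"
echo "CINDER_TENANTNAME = $OS_TENANT_NAME"
echo "CINDER_REGIONNAME = $OS_REGION_NAME"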

docker plugin install --grant-all-permissions rexray/cinder:edge \
CINDER_AUTHURL=https://auth.cloud.ovh.net/v2.0/ \
CINDER_USERNAME=<Openstack Username> \
CINDER_PASSWORD=<Openstack Password> \
CINDER_TENANTID=<Openstack Tenant ID> \
CINDER_TENANTNAME=<Openstack Tenant Name> \
CINDER_REGIONNAME=<Openstack Region, e.g. GRA3> \
CINDER_AVAILABILITYZONENAME=nova \
REXRAY_FSTYPE=ext4 \
REXRAY_PREEMPT=true

Test your infra

Create the persistent volume

docker volume create --name docker-swarm-prod-pg_data --driver rexray/cinder:edge --opt size=5

You can see the volume on each swarm node

docker volume ls

(Screenshot: docker volume ls output on both nodes)
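You can also inspect the volume to check its driver and requested size:

docker volume inspect docker-swarm-prod-pg_data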

Create a test container with the persistent volume attached

docker service create --replicas 1 --name pg -e POSTGRES_PASSWORD=mysecretpassword --mount type=volume,source=docker-swarm-prod-pg_data,target=/var/lib/postgresql/data,volume-driver=rexray/cinder:edge --constraint 'node.role==worker' postgres

This container will start on the worker node (due to the constraint)
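If you prefer a declarative deployment, the same service can be described in a stack file. The sketch below is one possible equivalent, assuming the volume created above and a file named pg-stack.yml (both names are just examples); the rest of this walkthrough keeps using the docker service create command above.

version: "3.7"
services:
  pg:
    image: postgres
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    volumes:
      - pg_data:/var/lib/postgresql/data
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == worker
volumes:
  pg_data:
    external: true
    name: docker-swarm-prod-pg_data

docker stack deploy -c pg-stack.yml pg-stack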

(Screenshot: the service starting on the worker instance)

On the worker, you can see the container with this command

docker ps

You can also see it in the OVH interface (the attached 5 GB volume)

(Screenshot: persistent volume attached to the worker)

On the worker node, connect to the container and create a new database

(Screenshot: creating a test database inside the container)

docker exec -ti <name> sh

su - postgres

psql

CREATE DATABASE testvolumepersistance;

\l

\q

exit

exit
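Alternatively, the same check can be done without an interactive shell (this relies on the default postgres superuser of the official image):

docker exec -ti <name> psql -U postgres -c "CREATE DATABASE testvolumepersistance;"

docker exec -ti <name> psql -U postgres -c "\l"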

Simulate a worker failure

On the manager, we remove the placement constraint from the service (because we only have 1 worker, the container must be allowed to move to the manager)

docker service update --constraint-rm node.role==worker pg
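(If you want to restore the original placement once the test is over, the constraint can be re-added with --constraint-add, the counterpart of --constraint-rm:)

docker service update --constraint-add node.role==worker pg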

On the worker, we stop docker

systemctl stop docker

On the manager we can see the status of the pool

docker node ls

(Screenshot: docker node ls showing the worker as Down)

We can also see the status of the service

docker service ps pg

(Screenshot: docker service ps pg output)

Our service has been restarted on the manager node.

In the OVH interface, we can see that our volume has been attached to the manager

(Screenshot: volume attached to the manager in the OVH interface)

What about the data? Connect to the container on the manager node

(Screenshot: the test database is still present)

docker ps

docker exec -ti <name> sh

su - postgres

psql

\l

\q

exit

exit

As we can see, our test database is still there!
