Getting Started Guide
Welcome to the clustering-with-docker wiki!
Set up Debian server VMs in VirtualBox.
Steps to be followed
- Set up the Docker Repository
Update the apt package index.
$ sudo apt-get update
Install packages to allow apt to use a repository over HTTPS:
$ sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
Add Docker’s official GPG key:
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
Use the following command to set up the stable repository:
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
- Install Docker Engine-Community
Update the apt package index.
$ sudo apt-get update
Install the latest version of Docker Engine - Community and containerd.
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
Verify that Docker Engine - Community is installed correctly by running the hello-world image.
$ sudo docker run hello-world
To use Docker as a non-root user, add the user to the docker group (log out and back in for the change to take effect):
$ sudo usermod -aG docker username
Installing Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications from a '.yml' file on a single Docker host.
Steps to be followed:
Run this command to download the current stable release of Docker Compose:
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Apply executable permissions to the binary:
$ sudo chmod +x /usr/local/bin/docker-compose
Test the installation.
$ docker-compose --version
Deploying a Docker Swarm cluster
Docker Swarm is a native clustering system for Docker. It makes multiple Docker hosts participating in the cluster behave as a single virtual host. A swarm consists of multiple Docker hosts running in swarm mode, each acting as a manager, a worker, or both. In Docker 1.12 and higher, swarm mode is integrated with Docker Engine. For swarm cluster creation, the following ports must be open between the hosts (a firewall example follows the list below).
- TCP port 2377 for cluster management communications
- TCP and UDP port 7946 for communication among nodes
- UDP port 4789 for overlay network traffic
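If a firewall is enabled on the Debian VMs, these ports have to be opened on every node. A minimal sketch using ufw, assuming ufw is the firewall in use (a plain Debian install may have no firewall enabled, in which case this step can be skipped):
$ sudo ufw allow 2377/tcp   # cluster management communications
$ sudo ufw allow 7946/tcp   # communication among nodes
$ sudo ufw allow 7946/udp   # communication among nodes
$ sudo ufw allow 4789/udp   # overlay network traffic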
How to initialize a Docker Swarm
Open a terminal on the machine that should act as the manager and run the command below; the manager node initializes the swarm cluster.
$ docker swarm init --advertise-addr 192.xxx.xxx.xxx:2377
Swarm initialized: current node (ljzebxs4pqufumo7i417t4fg3) is now a manager.
To add a worker to this swarm, run the following command:
$ docker swarm join --token "SWMTKN-......................................18x7x3a" 192.xxx.xxx.xxx:2377
The "--advertise-addr" flag configures the manager node to publish its address as 192.xxx.xxx.xxx. The other nodes in the swarm must be able to reach the manager at that IP address.
To add a worker node, run the join command printed above on each Docker host that should join the swarm.
$ docker swarm join --token "SWMTKN-......................................18x7x3a" 192.xxx.xxx.xxx:2377
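If the join token from the original output is no longer at hand, it can be printed again at any time on the manager node:
$ docker swarm join-token worker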
Use the below command to view information about swarm nodes:
$ docker node ls
A node can leave the swarm itself, or it can be removed from the node list by a manager node.
$ docker swarm leave (worker node leaving swarm; for manager node, use '-f' option)
$ docker node rm node-id (manager removing a worker node)
Docker Compose file
A Docker Compose file uses either the .yml or .yaml extension. The Compose file defines the services, networks, and volumes for a Docker application.
The main keys used in this project's Compose file are described below, followed by a sketch of how they fit together.
version:
Version 3.x is the latest and recommended version of the Compose file format and is designed to be cross-compatible between Compose and Docker's swarm mode. It is specified at the root of the YAML file.
image:
The image key specifies the image each service is started from, and therefore which images are pulled. Here, Docker images for Nginx, Node.js, and MongoDB are pulled from the GitLab container registry and containers are started from them.
ports:
The ports key specifies the ports to publish, either the container port alone or both host and container ports (HOST:CONTAINER).
deploy:
It is used to specify the configuration related to the deployment and running of services. Though it is ignored by "docker-compose up", it takes effect when deploying to a swarm with "docker stack deploy".
replicas:
Services run in replicated mode by default. 'replicas' specifies the number of containers that should be running at any given time. In this application, three replicas of each service are created across the swarm cluster.
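Putting these keys together, a minimal sketch of such a Compose file might look like the following. The service names and published ports are placeholders rather than the project's exact values; the image paths follow the registry naming used later in this guide.
version: "3.7"
services:
  nginx:
    image: registry-url/user-id/clustering-with-docker:nginx
    ports:
      - "80:80"          # HOST:CONTAINER, placeholder port mapping
    deploy:
      replicas: 3        # three containers spread across the swarm
  nodejs:
    image: registry-url/user-id/clustering-with-docker:nodejs
    deploy:
      replicas: 3
  mongo:
    image: registry-url/user-id/clustering-with-docker:mongo
    deploy:
      replicas: 3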
Dockerfile
A Dockerfile is a text document that contains all the commands needed to assemble an image. Docker reads the instructions from the Dockerfile and builds the image automatically.
FROM:
A valid Dockerfile must start with a "FROM" instruction. The "FROM" instruction initiates a new build stage and sets the Base Image for subsequent instructions in it.
CMD:
The main purpose of "CMD" is to provide defaults for an executing container; it sets the command that is executed when a container is started from the image.
WORKDIR:
This instruction sets the working directory for any commands like RUN, CMD, COPY that may follow it in the Dockerfile.
COPY:
The "COPY" instruction copies new files or directories from the host source and adds them to the filesystem of the container at the destination path. Here, the COPY instruction copies files from the current source directory from the host to the working directory /home/node in the container.
RUN:
"RUN" instruction is used to execute commands in a new layer on the top of the current image and commit the results. The resulting image is used for the next step in Dockerfile. Here, with 'RUN npm install', all the dependencies defined in the package.json file will be installed and creates a new layer on the top of the existing alpine node image.
build:
The docker build command builds an image from a Dockerfile and its build context. The context is a set of files at a specified location PATH on the local filesystem or URL. The build is run by the Docker daemon, not by the CLI. The first thing a build process does is to send the entire context to the Docker daemon.
Since GitLab's container registry is used here for storing images, log in to it with your GitLab username and password:
docker login registry-URL
Then each image is tagged with the repository path and tag under which it will be pushed.
docker build -t registry-url/user-id/clustering-with-docker:nodejs .
docker push registry-url/user-id/clustering-with-docker:nodejs
The new Node.js image is built from the Dockerfile, tagged with the project's registry path, and pushed to the GitLab registry with "docker push" so that it can be pulled later for deployment. The Nginx and MongoDB images are pulled from Docker Hub and similarly tagged and pushed to GitLab's registry as below.
docker tag nginx registry-url/user-id/clustering-with-docker:nginx
docker tag mongo registry-url/user-id/clustering-with-docker:mongo
docker push registry-url/user-id/clustering-with-docker:nginx
docker push registry-url/user-id/clustering-with-docker:mongo
Deploying web-service stack to the Docker swarm
The 'web-service' stack defined in the Compose file is deployed and managed on the Docker swarm using the following commands.
docker stack deploy --with-registry-auth -c docker-compose.yml webservice
The above command pulls the images specified in the Compose file from GitLab's registry (the "--with-registry-auth" flag passes the registry credentials to the swarm nodes) and deploys the webservice stack across the swarm nodes with the specified configuration, such as ports and number of replicas.
docker stack services webservice
This command shows the list of services running along with their replicas.
docker stack ps webservice
The "docker stack ps webservice" command shows the list of tasks running of each swarm node.
docker stack rm webservice
This brings the stack down, stops all its services, and removes the default overlay network that was created when the stack was brought up.
Load balancing
Swarm mode has an embedded DNS component that automatically assigns a DNS entry to each service in the swarm. The swarm's internal load balancer distributes requests among a service's replicas within the cluster based on the DNS name of the service. By default each service is assigned a virtual IP behind which requests are balanced; DNS round-robin can be used instead by changing the service's endpoint mode.
Reloading the web page repeatedly shows that incoming requests are routed to different containers, distributing the load among the service's replicas within the cluster.
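One way to observe this from the command line is to request the published service repeatedly from any node; assuming the service is published on port 80 (the address and port below are placeholders), the responses should come from different replicas:
$ for i in $(seq 1 5); do curl -s http://192.xxx.xxx.xxx:80; done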