If you are working on a microservice architecture where you need to run different project components on different machines, with master nodes controlling worker (slave) nodes, deploying your project through Docker Swarm can save you a lot of time, effort and resources.
Docker Swarm is a cluster of physical or virtual machines, called nodes, each running Docker containers. You configure these nodes to join a cluster managed by a master node called the swarm manager. It is an orchestration tool that lets you manage multiple Docker containers deployed across different machines. This type of architecture helps you manage your resources properly and work efficiently: it provides automatic load balancing and high service availability while letting you leverage the power of Docker containers.
In general, a Docker Swarm service runs in one of two modes. In Replicated mode, you specify the desired number of identical tasks (replicas), and the manager distributes them across the available nodes. In Global mode, the swarm runs exactly one task of the service on every available node, so the task count follows the size of the cluster.
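These two modes map onto the deploy.mode key of a version-3 compose/stack file. The sketch below is only illustrative: the service names (web, node-agent) and the nginx image are placeholders, not part of this tutorial's setup.

```yaml
version: "3.8"
services:
  web:                 # replicated: the manager schedules a fixed number of tasks
    image: nginx
    deploy:
      mode: replicated
      replicas: 4
  node-agent:          # global: exactly one task on every available node
    image: nginx
    deploy:
      mode: global
```

Replicated is the default mode, so omitting deploy.mode is equivalent to mode: replicated.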
In this article, we are going to discuss some of the most basic and important Docker Swarm Commands that will help you to kickstart your Swarm project.
Create 6 Docker Machines with the hyperv driver: one working as the Swarm manager and the other 5 as worker nodes.
sudo docker-machine create --driver hyperv manager
sudo docker-machine create --driver hyperv worker1
sudo docker-machine create --driver hyperv worker2
sudo docker-machine create --driver hyperv worker3
sudo docker-machine create --driver hyperv worker4
sudo docker-machine create --driver hyperv worker5
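Typing five near-identical create commands is error-prone, so you may prefer a small shell loop. This is only a sketch: it assumes docker-machine with the hyperv driver is installed, and it defaults to a dry run that merely prints the commands (set DRY_RUN=0 to actually execute them).

```shell
#!/bin/sh
# Dry-run by default: print the docker-machine commands instead of running them.
DRY_RUN="${DRY_RUN:-1}"

created=""
for i in 1 2 3 4 5; do
  cmd="docker-machine create --driver hyperv worker$i"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"                  # show what would be run
  else
    sudo $cmd                    # actually create the machine
  fi
  created="$created worker$i"    # keep track of the machines handled
done
```

Run it once with DRY_RUN=1 to review the commands, then again with DRY_RUN=0 to create the machines.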
Use the ls command to confirm whether the machines have been created or not.
sudo docker-machine ls
Copy the manager’s IP address.
sudo docker-machine ip manager
SSH into the manager node.
sudo docker-machine ssh manager
Now you are inside the manager prompt. To initialize the swarm, run the following command, substituting the IP address you copied earlier.
docker swarm init --advertise-addr <manager-ip>
Check the Docker Swarm status inside the manager node using the following command.
docker node ls
It displays that currently there is only one leader node called manager.
Inside the SSH session of manager node, to find out the command and token to join as a Worker or Manager node, you can use these commands.
docker swarm join-token worker
docker swarm join-token manager
Each of the above commands outputs the exact command (including a join token) that a node must run to join the cluster as a worker or as a manager, respectively.
We will now see how to add worker nodes to the cluster under manager.
While keeping the manager SSH session open, fire up another terminal and start the worker1 SSH session using the following command.
sudo docker-machine ssh worker1
Once you are inside the SSH session of worker1, copy the join command that the manager generated for workers and paste it into the worker1 session. After successful execution, you will see the message “This node joined the swarm as a worker”. Do the same for the other 4 workers. Once the cluster has 1 manager and 5 workers, you can confirm it by typing the following command inside the manager's SSH session.
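If you prefer to automate the join step from the host instead of opening five SSH sessions, a loop like the one below can push the join command to each worker through docker-machine ssh. Again, this is only a sketch: JOIN_CMD stands for the exact worker join command (with its real token and manager IP) printed on the manager, and DRY_RUN=1 (the default here) only prints what would be executed.

```shell
#!/bin/sh
# JOIN_CMD must be set to the full "docker swarm join --token ..." command
# that "docker swarm join-token worker" printed on the manager.
# The placeholders below are NOT valid values; they are just a reminder.
JOIN_CMD="${JOIN_CMD:-docker swarm join --token <worker-token> <manager-ip>:2377}"
DRY_RUN="${DRY_RUN:-1}"

joined=0
for i in 1 2 3 4 5; do
  if [ "$DRY_RUN" = "1" ]; then
    echo "docker-machine ssh worker$i \"$JOIN_CMD\""   # show what would run
  else
    sudo docker-machine ssh "worker$i" "$JOIN_CMD"     # run the join on the worker
  fi
  joined=$((joined + 1))
done
```

Export the real JOIN_CMD first, review the dry-run output, then rerun with DRY_RUN=0.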
docker node ls
After creating the Swarm cluster, you are ready to launch a service. We only need to tell the manager node that we want to launch a service (a set of running containers), and the manager automatically handles the distribution, execution and scheduling of the containers. In this example, we will launch 5 replicas of the nginx container and expose the service on port 80.
Inside the manager SSH session, execute the following command.
docker service create --replicas 5 -p 80:80 --name web nginx
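The same service can also be declared in a version-3 stack file and deployed with docker stack deploy -c web-stack.yml web (the file name here is arbitrary). A rough equivalent of the command above:

```yaml
version: "3.8"
services:
  web:
    image: nginx
    ports:
      - "80:80"        # published on every node via the routing mesh
    deploy:
      mode: replicated
      replicas: 5
```

Note that docker stack deploy prefixes service names with the stack name, so this service would appear as web_web in docker service ls.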
The orchestration layer is now working. After waiting for some time, you can execute this command inside the manager SSH session to confirm the same.
docker service ps web
To access the service, open the IP address of any of the worker or manager nodes in a browser, regardless of whether that particular node is running one of the containers; the swarm's routing mesh publishes the service's port on every node and routes each request to a running container.
Currently, you have 5 containers of nginx running in your swarm cluster. To scale up to 7 containers, use the following command inside the manager SSH session (docker service scale is shorthand for docker service update --replicas).
docker service scale web=7
Confirm the same using this command.
docker service ps web
To conclude, in this article we discussed how to create and deploy a Docker Swarm cluster by creating several virtual machines and assigning manager and worker roles to the nodes. We also created and launched an nginx service, scaled it up, and accessed it through any of the nodes. If you are a Docker developer, or you use Docker in a microservice project, a good grasp of Swarm clusters is essential for scaling your project and utilizing resources efficiently. Creating, launching, deploying and maintaining Docker Swarm clusters is a key skill for successfully running a large, distributed project.