The new Docker CE makes it easy to deploy and work with Docker Swarm.
I will describe how to set up a test 4-node Docker Swarm that will later be used to host a test enterprise application (a follow-up post will cover that).
STEP 1: First, some practical considerations
A Docker Swarm is, as the definition states, “a clustering and scheduling tool for Docker containers”.
This means that if we start a container as a service on the Docker Swarm, that container will float freely (free within the bounds of administrator-defined constraints) across the nodes that constitute the Docker Swarm. This gives us out-of-the-box high availability, automatic disaster recovery and optimized resource usage. Everything you want for mission-critical enterprise systems.
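As a quick preview of what this means in practice (the swarm itself is built in the steps below; the service name web and the nginx image are only illustrative), a replicated service is started with a single command and the scheduler spreads the replicas across the available nodes:
# docker service create --name web --replicas 3 nginx
If a node hosting one of the replicas goes down, the swarm automatically reschedules that replica on another node.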
Docker CE Engine introduced a new swarm mode for natively managing a Docker Swarm.
The new Docker swarm mode implements the Raft consensus algorithm and no longer requires an external key-value store such as ZooKeeper, doozerd or etcd. This is a big thing, as it eliminates the need to configure and support yet another service just to be able to manage the cluster (swarm).
One important note about the consensus engine: you should always have an odd number of manager nodes (so that a majority vote is possible) and more than one. So it looks like in a truly fault-tolerant enterprise setup we need at least 3 manager nodes.
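To make the quorum math concrete: Raft needs more than half of the managers to agree, so 3 managers tolerate the loss of 1, 5 tolerate 2 and 7 tolerate 3. An even count such as 4 still tolerates only 1 failure, which is why an odd number is recommended.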
STEP 2: Initialize the swarm
Make sure all the nodes that will be added to the swarm have Docker CE deployed in experimental mode. See the post Docker: Install the new Docker CE in experimental mode on Fedora Linux.
We first have to appoint a machine where the leader Docker manager node will run and use it to initialize the swarm. Note that a manager node uses some extra resources (CPU and memory) compared to a worker node, so be careful when designating it.
I chose my nas2.voina.org server as the leader Docker manager node.
[root@nas2 ~]# docker swarm init --advertise-addr 192.168.2.22
Swarm initialized: current node (kvanolk8l14nitzm9z2w5wg6i) is now a manager.
To add a worker to this swarm, run the following command:
...
Note that you must use a real IP of the server, not the server name, as the advertised address.
The swarm is created and a random ID is assigned to our leader Docker manager node.
The swarm also creates a random join token that new nodes will use as a key to connect to the swarm.
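The current worker join command (including the token) can be printed again at any time on a manager, and the token can be rotated if it ever leaks:
# docker swarm join-token worker
# docker swarm join-token --rotate worker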
STEP 3: Open ports
The following ports must be opened between the nodes of the swarm:
– 2377/TCP for cluster management communications
– 7946/TCP and 7946/UDP for communication among nodes
– 4789/UDP for overlay network traffic
– 50/ESP in case an overlay network with encryption (--opt encrypted) is used (see the example at the end of this step)
So on RedHat/CentOS/Fedora, which use firewalld, do the following.
Find the active zones.
# firewall-cmd --get-active-zones
public
interfaces: enp1s0
Then open the ports and the ESP protocol in the active zone.
# firewall-cmd --zone=public --add-port=2377/tcp --permanent
# firewall-cmd --zone=public --add-port=7946/tcp --permanent
# firewall-cmd --zone=public --add-port=7946/udp --permanent
# firewall-cmd --zone=public --add-port=4789/udp --permanent
# firewall-cmd --zone=public --add-rich-rule="rule protocol value=esp accept" --permanent
# firewall-cmd --reload
Find all the configurations associated with the active zone.
# firewall-cmd --zone=public --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: enp1s0
sources:
services: mdns dhcpv6-client ssh
ports: 2377/tcp 7946/tcp 7946/udp 4789/udp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
rule protocol value="esp" accept
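As mentioned above, the ESP rule is only needed if you create an overlay network with encryption enabled. Such a network would be created like this (my_net is just an example name):
# docker network create --driver overlay --opt encrypted my_net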
STEP 4: Add worker nodes
Let’s now add 3 worker nodes (nas1, nas3, nas4) to the swarm.
Execute on each of the designated worker nodes:
[root@nas1 storage]# docker swarm join \
> --token SWMTKN-1-5qwfhoezr4eq7edg7fg2gvtv5jl5m66br4vnugl63u9b9kk113-79168ao7nlq8dsn5kqzfxx9eh \
> 192.168.2.22:2377
Just to see how easy it is to set up a swarm across diverse servers, note that nas1, nas2 and nas4 are at my primary location and nas3 is at my secondary location, 100 km from the first site. The sites are connected by a site-to-site OpenVPN. See EdgeRouter: OpenVPN site-to-site VPN.
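On any of the workers you can quickly verify that the node has joined:
[root@nas1 storage]# docker info | grep Swarm
Swarm: active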
After the join command has been executed on all nodes, we can check the status of the swarm.
Execute the following on the leader manager node:
[root@nas2 docker]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
ajxhwzyrr89f7iq603g5gcd8e nas3.voina.org Ready Active
kvanolk8l14nitzm9z2w5wg6i * nas2.voina.org Ready Active Leader
vn7dz6fn4vf6pwpl5uwmvo4q7 nas4.voina.org Ready Active
y5ylewj3b2ycn5xl436df7dla nas1.voina.org Ready Active
STEP 5: Adding extra manager nodes
To add a manager to this swarm, run:
# docker swarm join-token manager
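This prints a docker swarm join command containing a manager join token; running it on a new node joins that node directly as a manager. Alternatively, an existing worker can be promoted to manager (the node name below is just an example from my setup):
# docker node promote nas1.voina.org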
STEP 6: Register a free Docker Cloud account
Go to Docker Cloud and register a new account. You will see the new Docker Cloud GUI, which looks much simpler than it did several months ago.
You can create a free Docker repository and organizations, and you can create swarms (using a cloud provider) or import existing ones.
STEP 7: Register the newly created swarm to Docker Cloud
To register an existing swarm go to “Swarms” menu. Click on “Bring your own swarm”.
As instructed, one more port must be opened in the firewall of the manager node.
You need to open incoming port 2376 in your firewall for Docker Cloud to communicate with the Docker daemon running on your manager node.
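Following the same firewalld pattern as in STEP 3, on the manager node this would be:
# firewall-cmd --zone=public --add-port=2376/tcp --permanent
# firewall-cmd --reload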
On the manager node (nas2) run the Docker Cloud registration container. Follow the instructions, log in with your Docker Cloud credentials, then assign a name to the swarm (voina/swarm).
[root@nas2 /]# docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock dockercloud/registration
Use your Docker ID credentials to authenticate:
Username: my_email
Password:
Available namespaces:
* voina
Enter name for the new cluster [my_mail/s0yrv7re0t291h7q36go47un2]: voina/swarm
You can now access this cluster using the following command in any Docker Engine:
docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_HOST dockercloud/client voina/swarm
After this you can go back to the Docker Cloud GUI, and under the “Swarms” menu the newly attached swarm will appear with status DEPLOYED.
At this point you cannot do a lot (the Swarms menu is in beta), but more operations and monitoring on registered swarms are promised in later updates.
STEP 8: Connect to the Docker swarm
To connect to the swarm, execute:
docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_HOST dockercloud/client voina/swarm
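The exact output may differ, but the client should print an export DOCKER_HOST=... line pointing at a local proxy (the port below is only an example); after exporting it, regular docker commands in that shell run against the swarm:
# export DOCKER_HOST=tcp://127.0.0.1:32768
# docker node ls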
STEP 9: Managing nodes of the swarm
To take a node out of the swarm, execute on the node itself:
# docker swarm leave
On the manager execute:
# docker node rm w3eq0hhcf7plx6hr3uhuxymcx
where w3eq0hhcf7plx6hr3uhuxymcx is the ID of the node to be removed.
The order is very important: if you try to remove a node on the manager before the node itself has left the swarm, the following error occurs:
[root@nas2 docker]# docker node rm w3eq0hhcf7plx6hr3uhuxymcx
Error response from daemon: rpc error: code = 9 desc = node w3eq0hhcf7plx6hr3uhuxymcx is not down and can't be removed
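If services are still running on the node, it is good practice to drain it first so the scheduler moves its tasks to the other nodes before the node leaves. On the manager execute:
# docker node update --availability drain w3eq0hhcf7plx6hr3uhuxymcx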
STEP 10: Test the swarm
To test the swarm I deployed a nice and simple application on it: Docker Swarm Visualizer.
On the manager node execute the following to create a Docker Swarm Visualizer service on the swarm:
[root@nas2 docker]# docker service create --name=viz --publish=8080:8080/tcp --constraint=node.role==manager --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock manomarks/visualizer
This deploys the swarm visualizer, which can be accessed on the manager node at http://192.168.2.22:8080.
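To verify the service and see where its task was scheduled (because of the node.role==manager constraint it will land on the manager), run on the manager node:
# docker service ls
# docker service ps viz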
In the next post I will use the features of the new Docker Compose to deploy a full environment on the swarm directly from a YAML file.