Docker Engine includes Swarm mode for natively managing a cluster of Docker Engines. The Docker CLI can be used to create a swarm, deploy application services to a swarm, and manage swarm behavior. Complete details are available at: https://docs.docker.com/engine/swarm/. It’s important to understand the key concepts of Swarm mode before starting this chapter.
A swarm is typically a cluster of multiple Docker Engines, but for simplicity we’ll run a single-node Swarm.
Initialize Swarm mode using the following command:
docker swarm init
This starts a Swarm manager. By default, a manager node is also a worker, but it can be configured to be manager-only.
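For example, a manager can be told not to run containers by draining it (the node ID or hostname comes from docker node ls; on this single-node swarm there would then be nowhere to run containers, so the command is shown for reference only):
docker node update --availability drain <NODE-ID>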
Find some information about this one-node cluster using the docker info command:
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 17
Server Version: 17.09.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: p9a1tqcjh58ro9ucgtqxa2wgq
 Is Manager: true
 ClusterID: r3xdxly8zv82e4kg38krd0vog
 Managers: 1
 Nodes: 1
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.65.2
 Manager Addresses:
  192.168.65.2:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.49-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.952GiB
Name: moby
ID: TJSZ:O44Y:PWGZ:ZWZM:SA73:2UHI:VVKV:KLAH:5NPO:AXQZ:XWZD:6IRJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 35
 Goroutines: 142
 System Time: 2017-10-05T20:57:14.037442426Z
 EventsListeners: 1
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
This cluster has one node, and that node is a manager. By default, a manager node is also a worker node, which means containers can run on this node.
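The nodes in the swarm, and which of them are managers, can be listed with:
docker node ls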
This section describes how to deploy a multi-container application, defined using Docker Compose, to Swarm mode using the Docker CLI.
Create a new directory and cd to it:
mkdir webapp
cd webapp
Create a new Compose definition, docker-compose.yml, with the application’s configuration.
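A minimal Compose file that lines up with the services, images, and ports used in the rest of this section would look roughly as follows (the MySQL environment variable values, including the database name and credentials, are assumptions and must match what the application expects):
version: '3'
services:
  db:
    container_name: db    # triggers the deprecation warning shown below
    image: mysql:8
    environment:          # database name and credential values are assumptions
      MYSQL_DATABASE: employees
      MYSQL_USER: mysql
      MYSQL_PASSWORD: mysql
      MYSQL_ROOT_PASSWORD: supersecret
    ports:
      - 3306:3306
  web:
    image: arungupta/docker-javaee:dockerconeu17
    ports:
      - 8080:8080
      - 9990:9990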
This application can be started as:
docker stack deploy --compose-file=docker-compose.yml webapp
This shows the output as:
Ignoring deprecated options:
container_name: Setting the container name is not supported.
Creating network webapp_default
Creating service webapp_db
Creating service webapp_web
The WildFly Swarm and MySQL services are started on this node. Each service has a single container. If Swarm mode is enabled on multiple nodes, the containers will be distributed across them.
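Which node each replica (task) of a service has been scheduled on can be checked with, for example:
docker service ps webapp_web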
A new overlay network is created. This can be verified using the command docker network ls. This network allows containers on different hosts to communicate with each other.
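The network created for this stack is named webapp_default, as seen in the deploy output above; its subnet and attached containers can be inspected with:
docker network inspect webapp_default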
Verify that the WildFly and MySQL services are running using docker service ls:
ID NAME MODE REPLICAS IMAGE PORTS
j21lwelj529f webapp_db replicated 1/1 mysql:8 *:3306->3306/tcp
m0m44axt35cg webapp_web replicated 1/1 arungupta/docker-javaee:dockerconeu17 *:8080->8080/tcp,*:9990->9990/tcp
More details about the service can be obtained using docker service inspect webapp_web:
[
    {
        "ID": "m0m44axt35cgjetcjwzls7u9r",
        "Version": {
            "Index": 22
        },
        "CreatedAt": "2017-10-07T00:17:44.038961419Z",
        "UpdatedAt": "2017-10-07T00:17:44.040746062Z",
        "Spec": {
            "Name": "webapp_web",
            "Labels": {
                "com.docker.stack.image": "arungupta/docker-javaee:dockerconeu17",
                "com.docker.stack.namespace": "webapp"
            },
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "arungupta/docker-javaee:dockerconeu17@sha256:6a403c35d2ab4442f029849207068eadd8180c67e2166478bc3294adbf578251",
                    "Labels": {
                        "com.docker.stack.namespace": "webapp"
                    },
                    "Privileges": {
                        "CredentialSpec": null,
                        "SELinuxContext": null
                    },
                    "StopGracePeriod": 10000000000,
                    "DNSConfig": {}
                },
                "Resources": {},
                "RestartPolicy": {
                    "Condition": "any",
                    "Delay": 5000000000,
                    "MaxAttempts": 0
                },
                "Placement": {
                    "Platforms": [
                        {
                            "Architecture": "amd64",
                            "OS": "linux"
                        }
                    ]
                },
                "Networks": [
                    {
                        "Target": "bwnp1nvkkga68dirhp1ue7qey",
                        "Aliases": [
                            "web"
                        ]
                    }
                ],
                "ForceUpdate": 0,
                "Runtime": "container"
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "RollbackConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 8080,
                        "PublishedPort": 8080,
                        "PublishMode": "ingress"
                    },
                    {
                        "Protocol": "tcp",
                        "TargetPort": 9990,
                        "PublishedPort": 9990,
                        "PublishMode": "ingress"
                    }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 8080,
                        "PublishedPort": 8080,
                        "PublishMode": "ingress"
                    },
                    {
                        "Protocol": "tcp",
                        "TargetPort": 9990,
                        "PublishedPort": 9990,
                        "PublishMode": "ingress"
                    }
                ]
            },
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 8080,
                    "PublishedPort": 8080,
                    "PublishMode": "ingress"
                },
                {
                    "Protocol": "tcp",
                    "TargetPort": 9990,
                    "PublishedPort": 9990,
                    "PublishMode": "ingress"
                }
            ],
            "VirtualIPs": [
                {
                    "NetworkID": "vysfza7wgjepdlutuwuigbws1",
                    "Addr": "10.255.0.5/16"
                },
                {
                    "NetworkID": "bwnp1nvkkga68dirhp1ue7qey",
                    "Addr": "10.0.0.4/24"
                }
            ]
        }
    }
]
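Instead of reading through the full JSON, individual fields can be extracted with a Go template; for example, the replica count (the template path mirrors the structure shown above):
docker service inspect -f '{{.Spec.Mode.Replicated.Replicas}}' webapp_web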
Logs for the service can be seen using docker service logs -f webapp_web:
webapp_web.1.lf3y5k7pkpt9@moby | 00:17:47,296 INFO [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final
webapp_web.1.lf3y5k7pkpt9@moby | 00:17:47,404 INFO [org.jboss.as] (MSC service thread 1-8) WFLYSRV0049: WildFly Core 2.0.10.Final "Kenny" starting
webapp_web.1.lf3y5k7pkpt9@moby | 2017-10-07 00:17:48,636 INFO [org.wildfly.extension.io] (ServerService Thread Pool -- 20) WFLYIO001: Worker 'default' has auto-configured to 8 core threads with 64 task threads based on your 4 available processors
. . .
webapp_web.1.lf3y5k7pkpt9@moby | 2017-10-07 00:17:56,619 INFO [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 12) RESTEASY002225: Deploying javax.ws.rs.core.Application: class org.javaee.samples.employees.MyApplication
webapp_web.1.lf3y5k7pkpt9@moby | 2017-10-07 00:17:56,621 WARN [org.jboss.as.weld] (ServerService Thread Pool -- 12) WFLYWELD0052: Using deployment classloader to load proxy classes for module com.fasterxml.jackson.jaxrs.jackson-jaxrs-json-provider:main. Package-private access will not work. To fix this the module should declare dependencies on [org.jboss.weld.core, org.jboss.weld.spi]
webapp_web.1.lf3y5k7pkpt9@moby | 2017-10-07 00:17:56,682 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 12) WFLYUT0021: Registered web context: /
webapp_web.1.lf3y5k7pkpt9@moby | 2017-10-07 00:17:57,094 INFO [org.jboss.as.server] (main) WFLYSRV0010: Deployed "docker-javaee.war" (runtime-name : "docker-javaee.war")
Make sure to wait for the last log statement, which indicates that the application has been deployed, to show up.
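If you prefer not to watch the logs, a small shell loop that polls the endpoint used below can serve the same purpose (a convenience sketch, not part of the original workflow):
until curl -sf http://localhost:8080/resources/employees > /dev/null; do sleep 5; done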
Now that the WildFly and MySQL servers have been configured, let’s access the application. You need to specify the IP address of the host where WildFly is running (localhost in our case).
The endpoint can be accessed in this case as:
curl -v http://localhost:8080/resources/employees
The output is shown as:
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /resources/employees HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
< Content-Type: application/xml
< Content-Length: 478
< Date: Sat, 07 Oct 2017 00:22:59 GMT
<
* Curl_http_done: called premature == 0
* Connection #0 to host localhost left intact
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><employee><id>1</id><name>Penny</name></employee><employee><id>2</id><name>Sheldon</name></employee><employee><id>3</id><name>Amy</name></employee><employee><id>4</id><name>Leonard</name></employee><employee><id>5</id><name>Bernadette</name></employee><employee><id>6</id><name>Raj</name></employee><employee><id>7</id><name>Howard</name></employee><employee><id>8</id><name>Priya</name></employee></collection>
This shows all employees stored in the database.
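Because webapp_web is a replicated service published through the ingress routing mesh (as shown in the inspect output above), it can also be scaled out; requests to port 8080 are then load-balanced across the replicas:
docker service scale webapp_web=2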
If you only want to stop the application temporarily while keeping any networks that were created as part of it, the recommended way is to scale the number of replicas of each service to 0:
docker service scale webapp_db=0 webapp_web=0
It shows the output:
webapp_db scaled to 0
webapp_web scaled to 0
Since --detach=false was not specified, tasks will be scaled in the background.
In a future release, --detach=false will become the default.
This is especially useful if the stack contains volumes and you want to keep the data. It allows you to simply start the stack again by setting the replicas to a number higher than 0.
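For example, the application can later be resumed with:
docker service scale webapp_db=1 webapp_web=1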
Shut down the application using docker stack rm webapp:
Removing service webapp_db
Removing service webapp_web
Removing network webapp_default
This stops the containers in each service and removes the services. It also deletes any networks that were created as part of this application.
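If the single-node swarm itself is no longer needed, the node can also leave Swarm mode; --force is required because it is the last manager:
docker swarm leave --force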