Note: This article was adapted from content originally written on October 19th, 2017, titled “Setting up a Private CI/CD Solution in Azure.” It has been simplified and split into four parts for easier reading.
Part 3: Docker Swarm and Core Services Configuration
Key Takeaways
- This part covers configuring Docker Swarm and setting up GitLab for version control within a CI/CD pipeline.
- Docker Swarm initializes a cluster with manager and worker nodes, creating a custom overlay network for service communication.
- GitLab is deployed as a Docker Swarm service, involving persistent storage preparation and post-installation configuration.
- A private Docker Registry is established for storing container images, requiring SSL certificate generation and client configuration.
- The article concludes by verifying service health and setting the stage for the next part on Jenkins configuration.
In this part, we’ll configure Docker Swarm to orchestrate our services, set up GitLab for version control and collaboration, and establish a private Docker Registry for container image management.
Docker Swarm Setup
Docker Swarm provides native clustering and orchestration capabilities for Docker. We’ll configure a highly available Swarm cluster with three manager nodes and two worker nodes.
Initialize the Swarm Cluster
Start by initializing the Swarm on the first manager node:
```bash
# SSH into VM-001 (first manager)
ssh spacely-eng-admin@10.0.0.4

# Initialize Docker Swarm
sudo docker swarm init --advertise-addr 10.0.0.4

# This will output a join token - save it!
# Example output:
# docker swarm join --token SWMTKN-1-xxx... 10.0.0.4:2377
```
The initialization command will provide two important pieces of information:
- A manager join token (for adding manager nodes)
- A worker join token (for adding worker nodes)
You can print either token again at any time by running docker swarm join-token manager or docker swarm join-token worker on a manager node.
Join Additional Manager Nodes
For high availability, add the remaining manager nodes:
```bash
# SSH into VM-002
ssh spacely-eng-admin@10.0.0.5

# Get the manager token from VM-001 if needed
ssh spacely-eng-admin@10.0.0.4 "sudo docker swarm join-token manager"

# Join as manager
sudo docker swarm join --token SWMTKN-1-xxx... 10.0.0.4:2377

# Repeat for VM-003
ssh spacely-eng-admin@10.0.0.6
sudo docker swarm join --token SWMTKN-1-xxx... 10.0.0.4:2377
```
Join Worker Nodes
```bash
# SSH into VM-004 (first worker)
ssh spacely-eng-admin@10.0.0.7

# Join as worker
sudo docker swarm join --token SWMTKN-1-yyy... 10.0.0.4:2377

# Repeat for VM-005
ssh spacely-eng-admin@10.0.1.4
sudo docker swarm join --token SWMTKN-1-yyy... 10.0.0.4:2377
```
Verify Swarm Status
Check that all nodes have joined successfully:
```bash
# From any manager node
sudo docker node ls

# Expected output:
# ID       HOSTNAME             STATUS   AVAILABILITY   MANAGER STATUS
# abc123 * spacely-eng-vm-001   Ready    Active         Leader
# def456   spacely-eng-vm-002   Ready    Active         Reachable
# ghi789   spacely-eng-vm-003   Ready    Active         Reachable
# jkl012   spacely-eng-vm-004   Ready    Active
# mno345   spacely-eng-vm-005   Ready    Active
```
Create Overlay Network
Create a custom overlay network for service communication:
```bash
# Create overlay network
sudo docker network create \
  --driver overlay \
  --subnet=172.16.255.0/24 \
  --attachable \
  spacely-engineering-network
```
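To confirm the network was created with the expected subnet, you can inspect it from any manager node; this quick check is optional and not required for the rest of the setup:

```bash
# List overlay networks and show the subnet assigned to ours
sudo docker network ls --filter driver=overlay
sudo docker network inspect spacely-engineering-network --format '{{ json .IPAM.Config }}'
```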
Label Nodes for Service Placement
Apply labels to control where services are deployed:
```bash
# Label manager nodes
sudo docker node update --label-add role=manager spacely-eng-vm-001
sudo docker node update --label-add role=manager spacely-eng-vm-002
sudo docker node update --label-add role=manager spacely-eng-vm-003

# Label worker nodes for Jenkins builds
sudo docker node update --label-add role=jenkins-worker spacely-eng-vm-004
sudo docker node update --label-add role=jenkins-worker spacely-eng-vm-005
```
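If you want to double-check that the labels landed on the right nodes before deploying services, a sketch like the following works from any manager node:

```bash
# Print the labels applied to each node in the cluster
for node in spacely-eng-vm-001 spacely-eng-vm-002 spacely-eng-vm-003 \
            spacely-eng-vm-004 spacely-eng-vm-005; do
  echo -n "$node: "
  sudo docker node inspect "$node" --format '{{ .Spec.Labels }}'
done
```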
GitLab Configuration
GitLab will serve as our version control system and collaboration platform. We’ll deploy it as a Docker Swarm service for high availability.
Prepare GitLab Directories
First, create persistent storage directories on all manager nodes:
```bash
# Run on all manager nodes
for node in 10.0.0.4 10.0.0.5 10.0.0.6; do
  ssh spacely-eng-admin@$node << 'EOF'
sudo mkdir -p /srv/gitlab/config
sudo mkdir -p /srv/gitlab/logs
sudo mkdir -p /srv/gitlab/data
sudo chmod -R 755 /srv/gitlab
EOF
done
```
Deploy GitLab Service
Create a Docker Compose file for GitLab deployment:
```bash
# Create gitlab-stack.yml
cat << 'EOF' > gitlab-stack.yml
version: '3.7'

services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: 'gitlab.example.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://gitlab.example.com'
        gitlab_rails['gitlab_shell_ssh_port'] = 2222
        nginx['listen_port'] = 10080
        nginx['listen_https'] = false
        gitlab_rails['initial_root_password'] = 'ComplexPassword123!'
        gitlab_rails['gitlab_signin_enabled'] = true
        gitlab_rails['time_zone'] = 'America/New_York'
        postgresql['shared_buffers'] = "256MB"
        unicorn['worker_processes'] = 4
        unicorn['worker_timeout'] = 60
    ports:
      - target: 10080
        published: 10080
        protocol: tcp
        mode: host
      - target: 22
        published: 2222
        protocol: tcp
        mode: host
    volumes:
      - /srv/gitlab/config:/etc/gitlab
      - /srv/gitlab/logs:/var/log/gitlab
      - /srv/gitlab/data:/var/opt/gitlab
    networks:
      - spacely-engineering-network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.labels.role == manager
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3

networks:
  spacely-engineering-network:
    external: true
EOF

# Deploy the stack
sudo docker stack deploy -c gitlab-stack.yml gitlab
```
Monitor GitLab's startup progress with:

```bash
sudo docker service logs gitlab_gitlab -f
```

Configure GitLab Post-Installation
Once GitLab is running, perform the initial configuration:
- Access GitLab at http://gitlab.example.com (through VPN)
- Login with:
  - Username: root
  - Password: ComplexPassword123! (change immediately)
- Navigate to Admin Area → Settings → General
- Configure:
  - Account and limit settings
  - Sign-up restrictions (disable public sign-ups)
  - Project creation limits
- Create user accounts for your team
- Set up groups and projects structure
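The sign-up restriction above can also be applied without clicking through the UI, using GitLab's application settings API. A minimal sketch, assuming you have created a personal access token with the api scope for the root account (the token value is a placeholder):

```bash
# Confirm GitLab is responding before making API calls
curl -sfo /dev/null http://gitlab.example.com/users/sign_in && echo "GitLab is up"

# Disable public sign-ups through the application settings API
# <admin-token> is a placeholder personal access token for root
curl -s --request PUT \
  --header "PRIVATE-TOKEN: <admin-token>" \
  "http://gitlab.example.com/api/v4/application/settings?signup_enabled=false"
```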
Set Up SSH Access for Git
Configure SSH keys for Git operations:
```bash
# Generate SSH key on your local machine
ssh-keygen -t ed25519 -C "your.email@example.com"

# Copy public key
cat ~/.ssh/id_ed25519.pub

# Add to GitLab:
# 1. Login to GitLab
# 2. Go to User Settings → SSH Keys
# 3. Paste the public key
# 4. Save

# Test connection (port 2222 for GitLab SSH)
ssh -T git@gitlab.example.com -p 2222
```
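Because GitLab's SSH is published on a non-standard port, adding a host entry to your local SSH configuration saves typing the port on every Git operation. An optional sketch (the group/project path is a placeholder for one of your repositories):

```bash
# Add a host entry so Git uses port 2222 for gitlab.example.com automatically
cat << 'EOF' >> ~/.ssh/config
Host gitlab.example.com
    User git
    Port 2222
    IdentityFile ~/.ssh/id_ed25519
EOF

# SSH-style Git URLs now work without specifying the port
git clone git@gitlab.example.com:group/project.git  # placeholder repository path
```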
Docker Registry Setup
A private Docker Registry is essential for storing and distributing container images within your organization.
Generate SSL Certificates
First, create self-signed certificates for the registry:
```bash
# Create certificate directory on manager nodes
for node in 10.0.0.4 10.0.0.5 10.0.0.6; do
  ssh spacely-eng-admin@$node "sudo mkdir -p /srv/certs"
done

# Generate certificates on VM-001
ssh spacely-eng-admin@10.0.0.4 << 'EOF'
# Generate private key
sudo openssl genrsa -out /srv/certs/registry.key 2048

# Generate certificate request
sudo openssl req -new -key /srv/certs/registry.key \
  -out /srv/certs/registry.csr \
  -subj "/C=US/ST=State/L=City/O=Spacely Engineering/CN=docker-registry.example.com"

# Generate self-signed certificate
sudo openssl x509 -req -days 365 \
  -in /srv/certs/registry.csr \
  -signkey /srv/certs/registry.key \
  -out /srv/certs/registry.crt

# Set permissions
sudo chmod 644 /srv/certs/registry.crt
sudo chmod 600 /srv/certs/registry.key
EOF

# Copy certificates to other manager nodes
for node in 10.0.0.5 10.0.0.6; do
  ssh spacely-eng-admin@10.0.0.4 "sudo cat /srv/certs/registry.crt" | \
    ssh spacely-eng-admin@$node "sudo tee /srv/certs/registry.crt"
  ssh spacely-eng-admin@10.0.0.4 "sudo cat /srv/certs/registry.key" | \
    ssh spacely-eng-admin@$node "sudo tee /srv/certs/registry.key"
done
```
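Before wiring the certificate into the registry, it may be worth sanity-checking the subject and validity window of what was just generated:

```bash
# Inspect the certificate's subject and validity dates on VM-001
ssh spacely-eng-admin@10.0.0.4 \
  "sudo openssl x509 -in /srv/certs/registry.crt -noout -subject -dates"
```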
Deploy Docker Registry
```bash
# Create registry stack file
cat << 'EOF' > registry-stack.yml
version: '3.7'

services:
  registry:
    image: registry:2
    environment:
      REGISTRY_HTTP_ADDR: 0.0.0.0:5000
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/registry.crt
      REGISTRY_HTTP_TLS_KEY: /certs/registry.key
      REGISTRY_STORAGE_DELETE_ENABLED: 'true'
    ports:
      - target: 5000
        published: 5000
        protocol: tcp
        mode: host
    volumes:
      - /srv/registry:/var/lib/registry
      - /srv/certs:/certs:ro
    networks:
      - spacely-engineering-network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.labels.role == manager
      restart_policy:
        condition: on-failure
        delay: 5s

networks:
  spacely-engineering-network:
    external: true
EOF

# Deploy the registry
sudo docker stack deploy -c registry-stack.yml registry
```
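Before configuring clients, it can help to confirm the registry task actually started and was scheduled onto a manager node; one way to check:

```bash
# Show the registry task, the node it was placed on, and its current state
sudo docker service ps registry_registry \
  --format 'table {{.Name}}\t{{.Node}}\t{{.CurrentState}}'
```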
Configure Docker Clients
Configure all nodes to trust the registry certificate:
```bash
# Run on all nodes (managers and workers)
for node in 10.0.0.4 10.0.0.5 10.0.0.6 10.0.0.7 10.0.1.4; do
  ssh spacely-eng-admin@$node << 'EOF'
# Create Docker cert directory
sudo mkdir -p /etc/docker/certs.d/docker-registry.example.com:5000

# Copy certificate (get from VM-001 first)
sudo cp /srv/certs/registry.crt \
  /etc/docker/certs.d/docker-registry.example.com:5000/ca.crt

# Restart Docker
sudo systemctl restart docker
EOF
done
```
Test Registry Access
```bash
# Pull a test image
sudo docker pull hello-world

# Tag for private registry
sudo docker tag hello-world docker-registry.example.com:5000/hello-world:latest

# Push to registry
sudo docker push docker-registry.example.com:5000/hello-world:latest

# Test pull from registry
sudo docker pull docker-registry.example.com:5000/hello-world:latest
```
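The registry also exposes a small HTTP API that is handy for verifying pushes without pulling images. A quick check (using -k because the certificate is self-signed):

```bash
# List repositories stored in the registry
curl -k https://docker-registry.example.com:5000/v2/_catalog

# List tags for the hello-world image pushed above
curl -k https://docker-registry.example.com:5000/v2/hello-world/tags/list
```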
Internal DNS Configuration
Set up Bind9 for internal DNS resolution to make services easily accessible:
```bash
# Create bind9 stack
cat << 'EOF' > dns-stack.yml
version: '3.7'

services:
  bind9:
    image: internetsystemsconsortium/bind9:9.16
    environment:
      TZ: 'America/New_York'
    ports:
      - target: 53
        published: 53
        protocol: tcp
        mode: host
      - target: 53
        published: 53
        protocol: udp
        mode: host
    volumes:
      - /srv/bind9/config:/etc/bind
      - /srv/bind9/cache:/var/cache/bind
      - /srv/bind9/records:/var/lib/bind
    networks:
      - spacely-engineering-network
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.role == manager
      restart_policy:
        condition: on-failure

networks:
  spacely-engineering-network:
    external: true
EOF

# Create DNS configuration directories
sudo mkdir -p /srv/bind9/config /srv/bind9/cache /srv/bind9/records
cat << 'EOF' | sudo tee /srv/bind9/config/named.conf.local
zone "example.com" {
    type master;
    file "/var/lib/bind/db.example.com";
};
EOF

# Create zone file
cat << 'EOF' | sudo tee /srv/bind9/records/db.example.com
$TTL    604800
@       IN      SOA     ns1.example.com. admin.example.com. (
                        2021010101 ; Serial
                        604800     ; Refresh
                        86400      ; Retry
                        2419200    ; Expire
                        604800 )   ; Negative Cache TTL
;
@       IN      NS      ns1.example.com.
ns1     IN      A       10.0.0.4

; Service records
gitlab          IN      A       10.0.250.10
jenkins         IN      A       10.0.250.11
docker-registry IN      A       10.0.250.12
EOF

# Deploy DNS service
sudo docker stack deploy -c dns-stack.yml dns
```
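If you want to catch zone-file typos before (or after) deploying, BIND ships a syntax checker that can be run from the same image; a sketch, assuming named-checkzone is available in the image as it is in standard BIND packages:

```bash
# Validate the zone file syntax with named-checkzone from the bind9 image
sudo docker run --rm -v /srv/bind9/records:/zones \
  --entrypoint named-checkzone \
  internetsystemsconsortium/bind9:9.16 \
  example.com /zones/db.example.com
```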
Configure VMs to Use Internal DNS
```bash
# Update DNS settings on all VMs
for node in 10.0.0.4 10.0.0.5 10.0.0.6 10.0.0.7 10.0.1.4; do
  ssh spacely-eng-admin@$node << 'EOF'
# Update resolv.conf via resolvconf
echo "nameserver 10.0.0.4" | sudo tee /etc/resolvconf/resolv.conf.d/head
echo "search example.com" | sudo tee -a /etc/resolvconf/resolv.conf.d/head
sudo resolvconf -u
EOF
done
```
Portainer for Management
Deploy Portainer for visual management of the Docker Swarm cluster:
```bash
# Create Portainer stack
cat << 'EOF' > portainer-stack.yml
version: '3.7'

services:
  agent:
    image: portainer/agent:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - spacely-engineering-network
    deploy:
      mode: global
      placement:
        constraints:
          - node.platform.os == linux

  portainer:
    image: portainer/portainer-ce:latest
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - target: 9000
        published: 9000
        protocol: tcp
        mode: ingress
    volumes:
      - /srv/portainer:/data
    networks:
      - spacely-engineering-network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.labels.role == manager

networks:
  spacely-engineering-network:
    external: true
EOF

# Deploy Portainer
sudo docker stack deploy -c portainer-stack.yml portainer

# Access at http://10.0.0.4:9000 through VPN
```
Service Health Verification
Verify all services are running correctly:
```bash
# Check service status
sudo docker service ls

# Expected services:
# gitlab_gitlab
# registry_registry
# dns_bind9
# portainer_portainer
# portainer_agent

# Check service logs if needed
sudo docker service logs [service_name] --tail 50

# Test service accessibility
curl -I http://gitlab.example.com
curl -k https://docker-registry.example.com:5000/v2/

# Verify DNS resolution
nslookup gitlab.example.com
nslookup jenkins.example.com
nslookup docker-registry.example.com
```
Next Steps
We’ve successfully configured Docker Swarm, deployed GitLab for version control, and set up a private Docker Registry. The foundation of our CI/CD infrastructure is now in place.
Continue to Part 4: Jenkins Configuration and Complete Workflow, where we’ll set up Jenkins with Blue Ocean, configure the CI/CD pipeline, and demonstrate the complete development workflow from code commit to deployment.
This is Part 3 of a 4-part series on setting up a private CI/CD solution in Azure.