
Load balancing a webserver with haproxy

Introduction

The setup runs on a single machine as a proof of concept. We can move the Docker instances into individual virtual machines once we are happy with the results and actually need to load balance the incoming traffic. HAProxy is a popular ingress controller in k8s, so I thought it would be a good idea to set this up and get familiar with the configuration.

In order to configure this system there are a few components at play:

  1. /etc/letsencrypt/credentials.ini: a configuration file with an API token for the DNS challenge
  2. Two Docker stacks with their corresponding configuration files, as shown in the tree diagram below
  3. A custom systemd service that wraps certbot’s compose stack
  4. A custom systemd timer that actually triggers the certbot compose stack
├── certbot
│   └── docker-compose.yml
└── haproxy
    ├── docker-compose.yml
    ├── haproxy
    │   └── haproxy.cfg
    ├── haproxy-lb
    │   └── haproxy.cfg
    └── nginx
        └── html
            └── index.html

1. Configuring haproxy

We’ll start by defining a dummy web server that answers incoming HTTP requests on the root of the server with a simple HTML page:

user@server:$ cat ./haproxy/nginx/html/index.html 
<h1>Welcome to nginx backend</h1>

Then we’ll define a compose stack that orchestrates a load balancer, two haproxy workers, and a dummy nginx application that serves the HTML page we defined above.

user@server:$ cat ./haproxy/docker-compose.yml
services:
  haproxy-lb:
    image: haproxy:alpine
    container_name: haproxy-lb
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./haproxy-lb/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
      - certbot_ssl:/etc/letsencrypt/:ro,Z
    depends_on:
      - nginx1
      - haproxy1
      - haproxy2

  haproxy1:
    image: haproxy:alpine
    restart: unless-stopped
    container_name: haproxy1
    volumes:
      - ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    depends_on:
      - nginx1

  haproxy2:
    image: haproxy:alpine
    restart: unless-stopped
    container_name: haproxy2
    volumes:
      - ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    depends_on:
      - nginx1

  nginx1:
    image: nginx:alpine
    container_name: nginx1
    volumes:
      - ./nginx/html:/usr/share/nginx/html:ro

volumes:
  certbot_ssl:
    external: true

networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1450
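
Note that certbot_ssl is declared as an external volume, so it has to exist before this stack will start; it is created by the certbot stack from section 2. Assuming the two stack directories live under /var/container (the path the systemd unit later uses), bringing everything up looks roughly like this:

user@server:$ cd /var/container/haproxy
user@server:$ docker volume inspect certbot_ssl > /dev/null   # must already exist
user@server:$ docker compose up -d
user@server:$ docker compose ps   # haproxy-lb, haproxy1, haproxy2 and nginx1 should be up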

The worker node, which forwards the requests to nginx, reads as follows:

user@server:$ cat ./haproxy/haproxy/haproxy.cfg 
global
  daemon
  maxconn 256
  log stdout format raw daemon

defaults
  timeout connect 5s
  timeout client  10s
  timeout server  10s
  log global

frontend http-in
    bind *:80
    mode http
    option forwardfor
    default_backend nginx-backend

backend nginx-backend
    mode http
    option httpchk GET /
    server nginx1 nginx1:80 check inter 60s
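
Whenever this file changes it’s worth validating it before restarting anything. A quick syntax check with the same image, run from the directory that holds both stacks:

user@server:$ docker run --rm -v "$PWD/haproxy/haproxy/haproxy.cfg":/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:alpine haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg

The check exits with a non-zero status if the configuration has errors.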

The control plane/master node, which forwards the requests to our two haproxies, should read as follows:

user@server:$ cat ./haproxy/haproxy-lb/haproxy.cfg 
global
  daemon
  maxconn 256
  log stdout format raw daemon

defaults
    mode http
    timeout client 10s
    timeout connect 5s
    timeout server 10s
    timeout http-request 10s

frontend http-in
    bind *:80
    redirect scheme https code 301 if !{ ssl_fc }

frontend https-in
    bind *:443 ssl crt /etc/letsencrypt/live/example.com/haproxy.pem
    mode http
    default_backend my_backend

backend my_backend
    mode http
    balance leastconn
    server haproxy1 haproxy1:80
    server haproxy2 haproxy2:80
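
Once the TLS certificate from the next section is in place and the stack is running, a quick smoke test from outside, assuming the DNS record for example.com points at this machine:

user@server:$ curl -sI http://example.com/ | head -n 3   # expect a 301 redirect to https
user@server:$ curl -s https://example.com/               # expect the nginx welcome page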

And that’s about it. Now we’ll need to configure the TLS certificates so that the communication is encrypted across the internet.

2. Configuring certbot

As explained earlier, this is simply a configuration file with an API token for the DNS challenge back at Hetzner. Using the DNS challenge saves us the hassle of opening tcp/80 to the internet.

user@server:$ cat /etc/letsencrypt/credentials.ini
dns_hetzner_api_token = <redacted>
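
Since this file holds an API token, it should only be readable by root; something along these lines:

user@server:$ chmod 600 /etc/letsencrypt/credentials.ini
user@server:$ chown root:root /etc/letsencrypt/credentials.ini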

Next comes a Docker stack that acts as an agent and triggers TLS certificate creation or renewal:

user@server:$ cat ./certbot/docker-compose.yml
services:
  certonly:
    image: inetsoftware/certbot-dns-hetzner
    command: certonly -n --agree-tos --email git@example.com --authenticator dns-hetzner --dns-hetzner-credentials /etc/letsencrypt/credentials.ini --dns-hetzner-propagation-seconds=30 -d '*.example.com' --deploy-hook "cat /etc/letsencrypt/live/example.com/fullchain.pem /etc/letsencrypt/live/example.com/privkey.pem > /etc/letsencrypt/live/example.com/haproxy.pem"
    volumes:
      - ssl:/etc/letsencrypt
      - /etc/letsencrypt/credentials.ini:/etc/letsencrypt/credentials.ini:Z
  renew:
    image: inetsoftware/certbot-dns-hetzner
    command: renew
    volumes:
      - ssl:/etc/letsencrypt
      - /etc/letsencrypt/credentials.ini:/etc/letsencrypt/credentials.ini:Z

volumes:
  ssl:

networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1450
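
The certonly service only needs to run once to obtain the initial wildcard certificate; after that the renew service takes over. A sketch of that first run, assuming the stack lives in /var/container/certbot as the systemd unit below expects:

user@server:$ cd /var/container/certbot
user@server:$ docker compose run --rm certonly

Since the project directory is named certbot and the volume is named ssl, compose creates the volume as certbot_ssl, which is exactly the external volume the haproxy stack mounts.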

The stack above starts and exits once it’s done, so we need to configure the next two components to make sure it gets triggered regularly:

user@server:$ cat /etc/systemd/system/certbot.service
[Unit]
Description=Certbot Renewal

After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/container/certbot
ExecStart=/bin/docker compose up renew
#ExecStop=/bin/docker compose down renew
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
user@server:$ systemctl enable --now certbot.service
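
If everything is wired up correctly the service runs certbot’s renew once and then exits; its output lands in the journal:

user@server:$ systemctl status certbot.service --no-pager
user@server:$ journalctl -u certbot.service -n 20
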
user@server:$ cat /etc/systemd/system/certbot.timer
[Unit]
Description=Run certbot weekly

[Timer]
Unit=certbot.service
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target

And we shouldn’t forget to reload systemd and enable the timer so that it doesn’t vanish on reboot:

user@server:$ systemctl daemon-reexec
user@server:$ systemctl daemon-reload
user@server:$ systemctl enable --now certbot.timer
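
To verify that the timer is scheduled and see when it fires next:

user@server:$ systemctl list-timers certbot.timer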