
k8s part 2: deploying a web app

Introduction

Last week we looked at how to install and configure Kubernetes; this time we’ll see how to deploy a web app with it. There are a lot of concepts to go through and I’m quite new to this field, so bear with me.

Install zsh and zsh completion

I’ve gotten used to zsh and the fuzzy history finder fzf, so I’ll have to set them up on the jumpbox before we start frantically typing kubectl commands.

root@jumpbox:$ apt update
root@jumpbox:$ apt install zsh

# Make a completions folder
root@jumpbox:$ mkdir -p ~/.zsh/completions

# Generate kubectl completions into it
root@jumpbox:$ kubectl completion zsh > ~/.zsh/completions/_kubectl

root@jumpbox:$ apt install fzf
root@jumpbox:$ cat <<'EOF' >> ~/.zshrc

# Enable Zsh completion system and pick up our completions folder
fpath=(~/.zsh/completions $fpath)
autoload -Uz compinit
compinit

# kubectl completion
source <(kubectl completion zsh)
source <(fzf --zsh)

# If you use alias 'k' for kubectl
alias k=kubectl
compdef __start_kubectl k

EOF

root@jumpbox:$ source ~/.zshrc

If you haven’t tried it yet, I can tell you you’re missing out on some huge time savers on the command line. fzf lets you find previous commands by typing partial fragments. How awesome is that? You can of course use the plain reverse-i-search that is built into bash, but it doesn’t do fuzzy matching on partials.


Namespaces

Almost every Kubernetes object can be assigned to a namespace; a few, such as persistent volumes and nodes, are cluster-scoped. Namespaces are useful for assigning quotas and permissions, and they prevent one team from overwriting another team’s deployment just because both happened to use the same name. You can pass the namespace explicitly on each command:

root@jumpbox:$ kubectl apply -f mongo-secret.yaml --namespace=other-namespace

or change the default namespace of your current kubectl context so you don’t have to repeat the flag:

root@jumpbox:$ kubectl config set-context --current --namespace=other-namespace
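
You can also pin the namespace directly in the manifest’s metadata; a minimal sketch with a hypothetical ConfigMap (names are for illustration only):

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config          # hypothetical object
  namespace: other-namespace    # lands in this namespace regardless of the current context
data:
  some_key: "some value"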

Setting up a MongoDB database and a web app

1. The database

The order of creation matters: the deployment references a secret, so the secret has to exist before we apply the deployment. Secret values are base64 encoded, so let’s encode the credentials first:

root@jumpbox:$ echo -n 'username' | base64
dXNlcm5hbWU=
root@jumpbox:$ echo -n 'password' | base64
cGFzc3dvcmQ=

and add them to mongo-secret.yaml, then apply it:

root@jumpbox:$ kubectl apply -f https://raw.githubusercontent.com/hupratt/kubernetes-the-hard-way/refs/heads/master/kubeconfiguration/mongo-secret.yml
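
For reference, the applied manifest boils down to something like this (a sketch; the file in the repo may differ slightly), with key names matching what the deployment references later on:

apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=   # base64 of 'username'
  mongo-root-password: cGFzc3dvcmQ=   # base64 of 'password'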

Next we’ll configure the database itself in mongo.yaml and create the service, so that we can reference the pod by its service name and rely on name resolution instead of IP addresses.
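
The service part is what provides that stable name; a sketch of what mongo-svc.yml roughly contains (the selector label is an assumption and has to match the pod template in mongo.yml):

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb              # assumed label on the mongo pods
  ports:
    - protocol: TCP
      port: 27017             # port exposed by the service
      targetPort: 27017       # containerPort of the mongo pod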

root@jumpbox:$ kubectl apply -f https://raw.githubusercontent.com/hupratt/kubernetes-the-hard-way/refs/heads/master/kubeconfiguration/mongo.yml
root@jumpbox:$ kubectl apply -f https://raw.githubusercontent.com/hupratt/kubernetes-the-hard-way/refs/heads/master/kubeconfiguration/mongo-svc.yml
root@jumpbox:$ kubectl get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/mongodb-deployment-6d9d7c68f6-mhmp7   1/1     Running   0          15m

NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
service/kubernetes        ClusterIP   10.0.0.1     <none>        443/TCP     5d19h
service/mongodb-service   ClusterIP   10.0.0.176   <none>        27017/TCP   20s

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb-deployment   1/1     1            1           15m

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-deployment-6d9d7c68f6   1         1         1       15m

2. DNS

We deployed a database in the last section, but if we try to deploy our web application now it will fail because there is no name resolution. Sure, you could hard-code the addresses in the /etc/hosts files and resolve names based on those, but let’s look at a better way of doing this. If we attach a DNS server and allow it to talk to the API server on the control plane, it can automatically discover the services it sees.

We will deploy a DNS server on the host network of each node, which we previously configured in our “./configs/kubelet-config.yaml”. CoreDNS needs permissions to query the k8s API server, so the manifest is a bit more complex. It’s easier to let the pods run on the hosts’ network; otherwise we would have to set up masquerading/NAT to make sure the firewall knows how to reach the pod network. With NAT, the worker node replaces the pod’s IP in the outgoing request with its own IP. Without masquerading, the firewall would receive requests from the two pod networks (10.200.0.0/24 and 10.200.1.0/24) and not know what to do with them, which would result in an error.
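
The heart of that manifest is the Corefile, which tells CoreDNS to answer for the cluster domain by querying the Kubernetes API and to forward everything else upstream. Roughly (an excerpt; the repo’s coredns.yaml may differ in detail):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # serve cluster.local and the PTR zones from the Kubernetes API
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        # everything else goes to the node's upstream resolvers
        forward . /etc/resolv.conf
        cache 30
    }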

root@jumpbox:$ kubectl apply -f https://raw.githubusercontent.com/hupratt/kubernetes-the-hard-way/refs/heads/master/kubeconfiguration/coredns.yaml
root@jumpbox:$ kubectl get pods -n kube-system -o wide
NAME                    READY   STATUS    RESTARTS      AGE   IP            NODE     NOMINATED NODE   READINESS GATES
coredns-f6c46c4-cs6tw   1/1     Running   1 (17m ago)   18m   192.168.3.82   node01   <none>           <none>
coredns-f6c46c4-rvcw5   1/1     Running   3 (17m ago)   18m   192.168.3.83   node02   <none>           <none>

Verify that the reverse PTR records are set properly and that there is internal and external connectivity. We should, of course, also be able to query the database service “mongodb-service” by its name without having to configure anything manually.

root@jumpbox:$ kubectl run -it --rm debug --image=infoblox/dnstools --restart=Never --command -- /bin/sh

dnstools# nslookup 10.0.0.1
1.0.0.10.in-addr.arpa	name = kubernetes.default.svc.cluster.local.

dnstools# nslookup kubernetes
Server:		10.0.0.10
Address:	10.0.0.10#53
Name:	kubernetes.default.svc.cluster.local
Address: 10.0.0.1

dnstools# nslookup telekom.de
Server:		10.0.0.10
Address:	10.0.0.10#53
Non-authoritative answer:
Name:	telekom.de
Address: 63.176.75.230
Name:	telekom.de
Address: 3.75.56.200
Name:	telekom.de
Address: 3.67.146.210

dnstools# nslookup mongodb-service
Server:		10.0.0.10
Address:	10.0.0.10#53

Name:	mongodb-service.default.svc.cluster.local
Address: 10.0.0.118

The worker nodes don’t have systemd-resolved managing DNS. You could therefore edit the /etc/resolv.conf file directly, but it should normally be populated automatically via DHCP based on your DHCP server’s configuration. If you do opt for the systemd-resolved route, however, you should not edit /etc/resolv.conf by hand and would normally use resolvectl or nmcli (NetworkManager) instead.
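
Note that what points the pods at CoreDNS is the kubelet configuration from part 1, not the nodes’ resolv.conf; the relevant bit of “./configs/kubelet-config.yaml” looks roughly like this (the 10.0.0.10 address is assumed from the resolver seen in the nslookup output above):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDomain: "cluster.local"
clusterDNS:
  - "10.0.0.10"   # ClusterIP of the CoreDNS service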

3. The Node.js application

Now we can deploy the configmap, the service and the Node.js application. We could have set the database URL directly as follows and avoided using a configmap or a service

(...)
    - name: ME_CONFIG_MONGODB_URL
      value: "mongodb://username:password@mongodb-service:27017/admin"

(...)

but it’s good practice to set up our web server this way instead, as it is more resilient and secure:

(...)
    - name: ME_CONFIG_MONGODB_ADMINUSERNAME
      valueFrom:
        secretKeyRef:
          name: mongodb-secret
          key: mongo-root-username
    - name: ME_CONFIG_MONGODB_ADMINPASSWORD
      valueFrom:
        secretKeyRef:
          name: mongodb-secret
          key: mongo-root-password
    - name: ME_CONFIG_MONGODB_SERVER
      valueFrom:
        configMapKeyRef:
          name: mongodb-configmap
          key: database_url

(...)
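
The configmap referenced in that last entry only needs to hold the database host name; a minimal mongo-configmap.yaml could look like this (a sketch; the repo’s file may differ):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service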

Download the mongo-express.yaml deployment and service together with the mongo-configmap.yaml configmap, start the deployment and marvel at our Node.js web server.

root@jumpbox:$ kubectl apply -f https://raw.githubusercontent.com/hupratt/kubernetes-the-hard-way/refs/heads/master/kubeconfiguration/mongo-configmap.yaml
root@jumpbox:$ kubectl apply -f https://raw.githubusercontent.com/hupratt/kubernetes-the-hard-way/refs/heads/master/kubeconfiguration/mongo-express.yaml
root@jumpbox:$ curl -I -u admin:pass http://node01:30000
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 9288
ETag: W/"2448-8C4drgtvtNbipbeBY3oLYnSHr/M"
Set-Cookie: mongo-express=s%3AeE8grO3VkJ3EkBCk5ktP9Og3XAPZa9y-.lAKmOY8U4cp0ugY1MD%2BSN447ohkP72iBml7IS9mAQQ0; Path=/; HttpOnly
Date: Thu, 14 Aug 2025 14:06:25 GMT
Connection: keep-alive
Keep-Alive: timeout=5

You should also be able to see it from your computer by browsing to: http://192.168.3.82:30000/


Routing

With an ingress, instead of defining a nodePort and a “type: LoadBalancer” to make a service accessible from the outside like we did previously:

apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000

you define an ingress that forwards the traffic to your internal service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  rules:
    - host: app.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: my-service
              port: 
                number: 8080

The advantages of using an ingress are, for instance, that 1) you get cleaner URLs without port numbers at the end, 2) you have a centralized point to terminate your SSL connections and renew the certificates, and 3) you get more possibilities such as rewrite rules, header manipulation and so on.
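
Point 2, for example, only takes a few extra lines in the ingress spec; a sketch, assuming a TLS secret named app-com-tls already exists in the cluster:

(...)
spec:
  tls:
    - hosts:
        - app.com
      secretName: app-com-tls   # hypothetical secret holding the certificate and key
  rules:
(...)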

For this setup to work, however, you’ll need an ingress controller, as it is not included in the barebones k8s cluster we set up earlier. One way is to install helm from the package manager and deploy the controller’s helm chart.

You can think of Helm charts as Kubernetes manifest files with placeholders that get filled in dynamically to configure the deployment.
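
For instance, a chart’s deployment template might look roughly like this, with the placeholders filled in from the chart’s values at install time (an illustrative excerpt, not taken from the haproxy-ingress chart):

# templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Chart.Name }}
spec:
  replicas: {{ .Values.replicaCount }}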

root@jumpbox:$ curl https://baltocdn.com/helm/signing.asc | gpg --dearmor -o /usr/share/keyrings/helm.gpg
root@jumpbox:$ echo "deb [signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | tee /etc/apt/sources.list.d/helm-stable-debian.list
root@jumpbox:$ apt update && apt install helm
root@jumpbox:$ helm repo add haproxy-ingress https://haproxy-ingress.github.io/charts
root@jumpbox:$ helm repo update
root@jumpbox:$ helm install haproxy-ingress haproxy-ingress/haproxy-ingress

Alternatively, simply run the following if you don’t have helm or don’t want to install it just yet:

root@jumpbox:$ kubectl apply -f https://raw.githubusercontent.com/hupratt/kubernetes-the-hard-way/refs/heads/master/kubeconfiguration/ingress-rbac.yml
root@jumpbox:$ kubectl apply -f https://raw.githubusercontent.com/hupratt/kubernetes-the-hard-way/refs/heads/master/kubeconfiguration/ingress-controller.yml
root@jumpbox:$ kubectl apply -f  
root@jumpbox:$ kubectl get all 
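
The ingress for mongo-express itself is then just a variation of the generic example above; a sketch, assuming the hostname used below and the service name and port from earlier:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mongo-express-ingress        # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  rules:
    - host: k8s.thekor.eu
      http:
        paths:
        - path: /
          pathType: Prefix           # match everything under /
          backend:
            service:
              name: mongo-express-service
              port:
                number: 8081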

I’ve added the DNS record to my local dnsmasq DNS server, and this is what the HTTP call looks like if you type http://k8s.thekor.eu into your browser.


The client resolves k8s.thekor.eu at the DNS server, which points to 192.168.3.82 and 192.168.3.83. Worker node01 or node02 receives the request and hands it to the ingress controller, which matches the ingress rule and forwards the request to the web app’s service, which in turn forwards it to the pod.

I’ve opted for dnsmasq as it allows you to specify two IP addresses for one A record, since we don’t have a load balancer just yet.

As a side note, if you have small VMs like mine you might consider reducing the memory requests and limits:

root@jumpbox:$ kubectl edit deployment haproxy-kubernetes-ingress -n haproxy-controller
resources:
  limits:
    memory: 256Mi
  requests:
    memory: 128Mi

As you debug your deployments, these are useful commands to remember:

# kills the pods and spawns new ones
root@jumpbox:$ kubectl rollout restart deployment nginx

# follow the deployment logs
root@jumpbox:$ kubectl logs -f deployment/nginx

# this is useful to see where a pod got stuck 
root@jumpbox:$ kubectl describe pod nginx-pod 

# start a pod to debug DNS
root@jumpbox:$ kubectl run -it --rm debug --image=infoblox/dnstools --restart=Never --command -- /bin/sh

# test pod reachability from inside the ingress controller
root@jumpbox:$ kubectl exec -it haproxy-ingress-c47ff94cf-k7s27 -- curl http://10.200.0.51:8081/

Conclusion

And we have only scratched the surface of k8s. In the next articles I’ll probably dive into storage, Galera clusters, load balancers and the pros and cons of the available k8s CNI solutions out there.

Cheers

This post is licensed under CC BY 4.0 by the author.