Introduction

Currently I have a server that hosts a bunch of services for my home lab. The architecture is pretty simple:

  • The services are deployed in Docker containers
  • The services are exposed through Traefik

As an exercise, I decided to move all those services to Kubernetes.

The process

  1. I first tried to set everything up with minikube on my local machine. However, I ran into tons of complexity when trying to run Traefik within minikube (Ingresses not working, ports not working for some reason, etc.).
  2. After that, I settled on K3s, which is more lightweight and apparently tailored to low-end machines and home labs (my case). The best part is that it comes bundled with Traefik as the reverse proxy.
  3. I also tried k3d, which runs a K3s cluster inside Docker containers. It isn't much different from plain K3s and poses some challenges with port binding and forwarding to expose the services, so I discarded it and kept it simple with just K3s.
  4. I managed to deploy a sample service and route it through Traefik, getting ready to migrate a bigger service like Komga.
  5. Finally, I deployed Komga and accessed it via Traefik.

Cheatsheets

Make kubectl work with k3s

When you install k3s and kubectl, point kubectl at the k3s kubeconfig by copying it into place:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
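
The copied file stays owned by root, so kubectl run as your regular user won't be able to read it. Taking ownership fixes that (assuming a single-user machine):

sudo chown "$USER" ~/.kube/config
chmod 600 ~/.kube/config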

Helm vs Kustomize

Reading Reddit, the main consensus seems to be to use Helm as much as possible instead of Kustomize.
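
For context, a Helm chart like the local komga one used in the recipes below is just a directory; presumably laid out something like this:

komga/
├── Chart.yaml          # chart metadata (name, version, appVersion)
├── values.yaml         # default values, overridable with --set or -f
└── templates/          # Kubernetes manifests with Go templating
    ├── deployment.yaml
    ├── service.yaml
    └── ingress.yaml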

Useful Make recipes

install:
	helm install komga komga

upgrade:
	helm upgrade komga komga

delete:
	helm uninstall komga

pause:
	kubectl scale deployment komga -n komga --replicas=0

resume:
	kubectl scale deployment komga -n komga --replicas=1

# resolves the Grafana Service name (e.g. service/monitoring-grafana) and forwards it
expose-grafana:
	$(eval SVC_NAME=$(shell kubectl --namespace monitoring get svc -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=monitoring" -o name))
	kubectl port-forward $(SVC_NAME) 3001:80 -n monitoring

get-dashboard-password:
	kubectl get secret --namespace monitoring monitoring-grafana -o jsonpath="{.data.admin-password}" | base64 -d

Overriding rancher configs

If you add any manifest files to /var/lib/rancher/k3s/server/manifests/, k3s applies them automatically: the directory is read at startup and watched for changes.
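
The typical use is customizing the Traefik bundled with k3s through a HelmChartConfig dropped into that directory. A minimal sketch (the override values are just an illustration):

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    # illustrative override: raise Traefik's log level
    additionalArguments:
      - "--log.level=DEBUG"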

Adding Prometheus and Grafana

Installing Grafana and Prometheus

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

This installs:

  • Prometheus
  • Grafana
  • Node Exporter (host metrics)
  • kube-state-metrics
  • Alertmanager

Using the make expose-grafana recipe you can expose the Grafana dashboard on http://localhost:3001. The user is admin, and the password can be obtained with the make get-dashboard-password recipe.

Adding metrics to a service

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp                  # lets the ServiceMonitor below select this Service
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "8080"
spec:
  selector:
    app: myapp
  ports:
    - name: http                # named so the ServiceMonitor can reference it
      protocol: TCP
      port: 8080
      targetPort: 8080
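
Note that kube-prometheus-stack drives Prometheus through the Prometheus Operator, which by default does not honor the prometheus.io/* annotations above. The Operator-native way is a ServiceMonitor. A minimal sketch matching the Service above (and assuming the chart's default behavior of selecting ServiceMonitors labeled with the Helm release name, monitoring here):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp
  namespace: default
  labels:
    release: monitoring   # kube-prometheus-stack selects ServiceMonitors by this label by default
spec:
  selector:
    matchLabels:
      app: myapp          # matches the Service's labels
  endpoints:
    - port: http          # the named port on the Service
      path: /metrics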

Testing access to the Kubernetes API

Reference: https://chatgpt.com/c/682ae7d7-40c8-8012-82a8-fc3e70d49203
  • The service account token is available inside the pod at /var/run/secrets/kubernetes.io/serviceaccount/token
  • You can get a shell inside the pod with:

kubectl exec -n komga -it $(kubectl get pod -l app=komga -n komga -o jsonpath="{.items[0].metadata.name}") -- sh

  • Test it:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APISERVER=https://kubernetes.default.svc

curl -s --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     $APISERVER/api/v1/namespaces/komga/pods
  • By default the pod runs as the namespace's default service account, which has no RBAC permissions, so the request above fails with 403 Forbidden. To grant access, attach a dedicated service account with a Role and RoleBinding, as sketched below.
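
A minimal sketch, assuming the pod lives in the komga namespace (the service account and role names are placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: komga
  namespace: komga
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: komga
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: komga-pod-reader
  namespace: komga
subjects:
  - kind: ServiceAccount
    name: komga
    namespace: komga
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Then reference it from the Deployment's pod template with serviceAccountName: komga.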

YAML definitions in a nutshell

Let me see if I understand, and correct me if I make a mistake in my understanding: a Deployment declares the Pods that it wants to deploy, and how many replicas of that Pod. A Pod can have several containers that execute a service. A Service definition groups or unites all the Pods thanks to the `selector` declaration, and exposes the ports of those Pods, essentially load-balancing among them. And an Ingress definition exposes the Service to a reverse proxy so that it's accessible externally.

David, that's an excellent summary: you're 95% on point! Just a few tweaks to make it perfectly accurate. Here's your version with clarifications inline:


A Deployment declares the Pods that it wants to deploy, and how many replicas of that Pod.

Correct.
The Deployment also watches over those Pods, ensuring they’re replaced if they crash or change. It uses a ReplicaSet under the hood, but you rarely deal with that directly.
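
For example, a minimal Deployment sketch (the name and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api                      # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myregistry/api:1.0 # placeholder image
          ports:
            - containerPort: 8080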


A Pod can have several containers that execute a service.

✅ Yes, with a small note:
While a Pod can have multiple containers, this is rare in practice. It's usually done when containers need to share the same network and storage space, like:

  • Main app + sidecar (e.g. a logging agent or a proxy)

(Init containers are a separate mechanism: they run to completion before the main containers start.)

Each Pod has:

  • 1 IP
  • Shared volumes
  • Containers sharing the same Linux namespaces (network, IPC)


A Service definition groups or unites all the Pods thanks to the selector declaration, and exposes the ports of those Pods, essentially load-balancing among them.

Perfect.

Just to expand:

  • The Service creates a stable virtual IP and DNS name (my-service.default.svc.cluster.local)
  • Internally load balances across matching Pods via kube-proxy (iptables or IPVS rules)
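
A quick way to check the DNS part from inside the cluster (a sketch; assumes the busybox image and a Service named my-service in the default namespace):

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup my-service.default.svc.cluster.local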


And an Ingress definition exposes the Service to a reverse proxy so that it’s accessible externally.

✅ Yes, just one important clarification:

  • An Ingress is just a rule set.
  • It doesn’t do anything by itself.
  • You need an Ingress Controller (like Traefik, NGINX, Istio, etc.) to interpret those rules and handle the actual HTTP(S) traffic.

So:

Component            What it does
Service              Internal load balancer to Pods
Ingress              HTTP(S) routing rules
Ingress Controller   Reverse proxy implementing those rules

🧭 TL;DR version (reworded):

  • Deployment: declares how many Pods of a certain spec to run and keeps them alive.
  • Pod: runs one or more tightly coupled containers.
  • Service: groups matching Pods and load balances traffic to them.
  • Ingress: defines HTTP routing to Services, but needs an Ingress Controller to work.

Here’s a conceptual diagram of how Deployments, Pods, Services, and Ingress work together in a Kubernetes cluster:

┌────────────────────────────────────────────────────────────────────────────┐
│                                Ingress                                     │
│                         (e.g. route: /api → service/api)                  │
└────────────────────────────────────────────────────────────────────────────┘
                                      │
                                      ▼
                          ┌─────────────────────────┐
                          │ Ingress Controller      │
                          │ (e.g. Traefik, NGINX)   │
                          └─────────────────────────┘
                                      │
                                      ▼
                           ┌────────────────────┐
                           │    Service: api    │◀────┐
                           │ selector: app=api  │     │
                           └────────────────────┘     │
                                 │   ▲                │
     Load balances between       │   │                │
        matching Pods            ▼   │                │
                         ┌──────────────┐            │
                         │   Pod (1)    │◀─────┐      │
                         │ app: api     │      │      │
                         │ container(s) │      │      │
                         └──────────────┘      │      │
                         ┌──────────────┐      │      │
                         │   Pod (2)    │◀─────┼───────┘
                         │ app: api     │      │
                         │ container(s) │      │
                         └──────────────┘      │
                         ┌──────────────┐      │
                         │   Pod (3)    │◀─────┘
                         │ app: api     │
                         │ container(s) │
                         └──────────────┘

                              ▲
                              │
               ┌────────────────────────────┐
               │      Deployment: api       │
               │ replicas: 3, template w/   │
               │   labels: app: api         │
               └────────────────────────────┘

🔍 Key Points:

  • Deployment creates ReplicaSet, which creates Pods with matching labels (app=api).
  • Service selects Pods using a label selector and load balances traffic.
  • Ingress Controller reads the Ingress rules and sends HTTP traffic to the appropriate Service.
  • Ingress is just configuration, not the proxy itself.
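
To make the last point concrete, here is a minimal Ingress sketch that the Traefik bundled with k3s would pick up (the hostname and backend are placeholders matching the diagram):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  rules:
    - host: api.example.local        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api            # the Service from the diagram
                port:
                  number: 8080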


Next steps to learn

  • Persistent storage with local-path-provisioner or NFS.
  • Secrets and ConfigMaps for app configuration.
  • Helm charts for simplifying complex deployments (like Komga, Jellyfin, etc).
  • Monitoring (e.g., Prometheus + Grafana).
  • Certificates with Let’s Encrypt via Traefik.
  • Automated backups for manifests and PVs.