February 1, 2026
My Over-Engineered Homelab
Come join me as I misuse a bunch of networking and cloud deployment tools
I have finally decided to jump on the trend of homelabbing. I’m having the same feelings I used to have as a kid when building toys from parts I tore out of other toys. I am building shit, and shit is tangible!
The main reason I am building out a somewhat complex system is to scratch that itch, but more importantly, I am learning how systems are built. Have you ever wondered how people over at AWS put together elaborate systems that can be provisioned through software? Then you'd love to tinker with Proxmox. You can set up a cluster of computers and run VMs across them with just the web UI or CLI.

The foundation: Proxmox, VLANs, and a virtual router
Before Kubernetes, GitOps, or media servers, there’s one thing that everything in my homelab depends on: networking. I’m running Proxmox as the base hypervisor across my machines, and instead of letting everything sit on my home LAN, I split things up using VLANs and a pfSense router running as a VM.
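If you're curious what the Proxmox side of that looks like: the usual trick is to make the main bridge VLAN-aware and then tag each VM's virtual NIC. A rough sketch of /etc/network/interfaces on a host (the NIC name and VLAN range are placeholders, not necessarily what I run):

# VLAN-aware bridge on the Proxmox host; eno1 is a placeholder NIC name
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

Each VM's NIC then gets a VLAN tag in the Proxmox UI, and the pfSense VM owns the gateway interface on each of those VLANs.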
Before you judge
I can already tell someone's blood is boiling out there reading about my router in a VM. But security and isolation aren't the only reasons I'm doing it. The main reason is that I don't currently own a router of my own, and the one my ISP (Spectrum) gave me doesn't offer many customization options. So instead of letting my k3s nodes commit the crime of IP squatting on my home LAN, I just gave them their own world: all Kubernetes nodes use the pfSense VM as their default gateway, and all outbound traffic flows through it.
And honestly? It's been rock solid. For a homelab, it's been working just fine.
Cloud images and cloud-init
OK, I have to confess: I only learned about cloud-init while setting up my Proxmox cluster. How did I never stumble into it? I have no idea. But it's been yuge! It's no Terraform, but it's pretty nifty for quick setup. I know, I know, I learned Terraform before I found out about cloud-init. It's been a rather… wild ride? Way too much to learn. Dog gammit!
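If you haven't met it before: cloud images ship with cloud-init baked in, so Proxmox can inject a user, SSH keys, and network config on first boot instead of making me click through an installer. A rough sketch of turning an Ubuntu cloud image into a reusable template (the VM ID, storage, bridge, and VLAN tag are all placeholders, adjust to taste):

# Sketch: build a cloud-init-ready template from a downloaded cloud image
qm create 9000 --name ubuntu-cloud --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0,tag=20
qm importdisk 9000 noble-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
# cloud-init config gets attached as a tiny virtual CD-ROM drive
qm set 9000 --ide2 local-lvm:cloudinit
qm set 9000 --boot order=scsi0 --serial0 socket --vga serial0
qm set 9000 --ciuser me --sshkeys ~/.ssh/id_ed25519.pub --ipconfig0 ip=dhcp
qm template 9000

Clone that template, tweak the cloud-init fields, and a fresh VM is up and SSH-able in under a minute.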
What’s next?
Well, next up you might want to run a Kubernetes cluster using k3s, a lightweight, batteries-included Kubernetes distribution that works well in a homelab setup! I've even used it in production environments before.
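If you want to follow along, the install really is a one-liner. Here's a sketch of a server node, with the bundled service load balancer and ingress disabled because I'd rather bring my own (more on that further down):

# k3s server install, assuming you'll bring your own load balancer and ingress
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable servicelb --disable traefik" sh -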
Why?
Because it’s hella fun! I am running my own media services for movies, TV shows, books, and much more. Have you ever wanted to host your own website for your collection of PDFs? Check out Kavita. You can keep track of your place in the document, manage categories, reading lists, and authors.
💡 P.S. My media servers all contain content that’s in the public domain. All legal stuff. 😛
The stuff I’m actually running
Here are some of the services I am running on my homelab cluster:
- Jellyfin: For all my movies and TV shows. It's like my own personal Netflix.
- Sonarr, Radarr, Readarr, and Prowlarr: The "arr" stack. These guys automate my media collection. They find new releases, download them, and organize them for me.
- qBittorrent: The workhorse that downloads everything.
- Jellyseerr: A request management system for Jellyfin. My friends and family can request new content, and it gets automatically added to Sonarr or Radarr.
- Kavita: For my ebook and PDF collection. It's a fantastic reader with a great interface.
- Homepage: A beautiful and simple dashboard to have a quick glance at all my services.
- FlareSolverr: A proxy to bypass Cloudflare's anti-bot protection for some of the services.
How is it set up?
At the core, it's all about GitOps: the source of truth for your apps lives inside a git repository, and changes to the repo get reflected in your cluster automatically. There are many tools for this, but I've chosen one with a decent UI because I love to see them deployments go 🚀 😂
So, I went with ArgoCD for my GitOps setup. It may be overkill for a homelab, but look at that tree of deployments doing its thing. Why would you not want to see that?


App-of-apps: one app to rule them all
I've set it up with an app-of-apps pattern. I have a root app in ArgoCD that points to my Git repo. This root app doesn't deploy my services directly. Instead, it deploys other ArgoCD apps.
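Here's a minimal sketch of what such a root app can look like (the repo URL and path are placeholders). It just points ArgoCD at a directory that contains more Application manifests:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab.git  # placeholder repo
    targetRevision: main
    path: apps                                       # directory full of child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Drop a new manifest under apps/, push, and the new service shows up in the tree on the next sync.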
How I make it reachable (without opening my router up to the entire planet)
So how does one access my services over the interwebs? I’m glad you didn’t ask, cuz I’mma tell you either way.
Step 1: Cloudflare Tunnel — sneak traffic into the cluster
It all starts with a Cloudflare Tunnel. A small connector (cloudflared) running inside the cluster dials out to Cloudflare's network, which means traffic can reach my services without me exposing a single port on my router. It's secure and it makes my life a whole lot easier.
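For a locally managed tunnel, the routing lives in cloudflared's config file: it matches incoming hostnames and forwards them to in-cluster services, with a mandatory catch-all rule at the end. A minimal sketch (the hostnames and service targets are made up):

# cloudflared config.yml sketch; hostnames and targets are placeholders
tunnel: homelab
credentials-file: /etc/cloudflared/creds.json
ingress:
  - hostname: jellyfin.example.com
    service: http://jellyfin.media.svc.cluster.local:8096
  - hostname: requests.example.com
    service: http://jellyseerr.media.svc.cluster.local:5055
  - service: http_status:404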
Step 2: MetalLB — give services a real IP
Next up, MetalLB gives my services a real IP address. Because the cluster lives behind its own routed VLAN, MetalLB can safely hand out IPs without touching my home LAN. In a bare-metal cluster like mine this is essential: without it, a LoadBalancer service would sit in "pending" forever, because there's no cloud provider around to assign external IPs. It lets me expose my services as if they were running on a fancy cloud provider.
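Configuration-wise, MetalLB just needs a pool of addresses it's allowed to hand out and a way to announce them. A sketch in layer-2 mode (the range is a placeholder inside the cluster's VLAN):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.20.200-10.0.20.250  # placeholder range inside the k3s VLAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool

Any Service of type LoadBalancer then gets an IP from that pool automatically.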
Step 3: kube-vip — one VIP that refuses to die
On top of that, I use kube-vip to provide a virtual IP (VIP) that can float between k3s nodes. If one node goes down, the VIP can move to another node, so the cluster entry point stays reachable.
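kube-vip typically runs as a DaemonSet in kube-system and is configured through environment variables. A trimmed sketch of the bits that matter here (the interface and address are placeholders, and the exact variable names can differ between kube-vip versions):

# trimmed env block from a kube-vip DaemonSet; values are placeholders
env:
  - name: vip_interface        # NIC the VIP is announced on
    value: eth0
  - name: vip_arp              # ARP / layer-2 mode
    value: "true"
  - name: cp_enable            # float the control-plane endpoint
    value: "true"
  - name: vip_leaderelection   # only one node holds the VIP at a time
    value: "true"
  - name: address              # the virtual IP itself
    value: 10.0.20.2

Whichever node wins the leader election answers ARP for that address; if it dies, another node takes over and the VIP follows.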
Storage, aka “please don’t make me rebuild configs again”
Now for the part that makes all of this feel less like a house of cards: storage.
I run Longhorn on the cluster to get replicated, SSD-backed persistent storage for app configs. I set it up so every volume has 3 replicas, which means my apps aren't stuck to a single node just because their config lives on that node. Instead, every instance talks to the same replicated config storage, and Kubernetes can reschedule things without me sweating bullets. The relevant bits of my Longhorn config look like this:
persistence:
  defaultClass: false             # don't make Longhorn the cluster's default StorageClass
defaultSettings:
  defaultReplicaCount: 3          # every volume gets 3 copies
  replicaSoftAntiAffinity: true   # allow replicas of the same volume to share a node when needed
  defaultDataPath: /mnt/longhorn  # where replica data lives on each node
  node-drain-policy: always-evict
The annoying caveat: SQLite doesn’t do “shared writes”
There's a catch though: a bunch of these services still use SQLite, and those generally can't run multiple replicas because concurrent writes will corrupt the database. Longhorn doesn't magically fix that. But it does mean a single instance can hop to another node when it needs to, which is nice both for surviving node hiccups and for spreading load across nodes.
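In practice that usually means running those apps as single-replica Deployments with the Recreate strategy, so the old pod releases its volume before the new one mounts it. A sketch (the names and image are just illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarr
spec:
  replicas: 1        # SQLite: one writer, full stop
  strategy:
    type: Recreate   # don't let two pods grab the same RWO volume during a rollout
  selector:
    matchLabels:
      app: sonarr
  template:
    metadata:
      labels:
        app: sonarr
    spec:
      containers:
        - name: sonarr
          image: lscr.io/linuxserver/sonarr:latest
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          persistentVolumeClaim:
            claimName: sonarr-config  # Longhorn-backed volume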
Next up, I want to clean this up even more—maybe Nix-built images, maybe more automation, maybe more regret. We’ll see. 🫃