Building My Hybrid Kubernetes Cluster II
About six months ago I wrote about building my hybrid Kubernetes cluster. At that stage it was still evolving (and I was still learning). Some bits added. Bits removed. Some experiments worked. Some were pointless. That’s the fun of it.
Now it feels settled. Intentional.
The cluster today is built around five Raspberry Pi 5s and one Intel NUC. One Pi 5 runs as the primary K3s master at 192.168.0.10. Four Pi 5s run as dedicated workers. The Intel NUC at 192.168.0.20 is also a worker, but it’s my heavy compute node as well. Everything is 64-bit. Everything runs properly off SSD storage.
Each Raspberry Pi 5 has 8GB of RAM and its own 500GB SSD. No more relying on microSD cards for anything serious. SD cards are fine for a weekend project. They are not fine for a distributed system that is constantly writing logs, pulling containers, running storage backends and shifting data around. Moving to SSDs made the whole cluster feel solid. Faster container pulls. Faster restarts. Less random weirdness.
The Intel laptop is gone from the cluster for now. It worked, but it wasn’t necessary. The PCDuinos are also gone. They were interesting boards, but the performance per watt just didn’t justify their place in the rack. The Pi 5 absolutely outclasses them. More RAM. Faster CPU. Better storage options. Less compromise.
The structure is simple now.
1x Pi 5 as master.
4x Pi 5s as ARM64 worker nodes.
1x Intel NUC as x64 worker and AI engine.
That’s it... at least for now. It’s scaled well enough for my current purposes.
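Standing that shape up with K3s takes only a couple of commands per node. A minimal sketch of the standard install flow (the master IP matches my network; `<node-token>` is a placeholder for the token the server generates):

```shell
# On the master Pi (192.168.0.10): install the K3s server.
curl -sfL https://get.k3s.io | sh -

# Read the join token the server generated.
sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker (the four Pis and the NUC): join as an agent.
# K3S_TOKEN is the value read above -- shown here as a placeholder.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://192.168.0.10:6443 K3S_TOKEN=<node-token> sh -
```

The same agent command works on both the ARM Pis and the x64 NUC; the K3s installer picks the right binary for each architecture.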
The Pi nodes handle the distributed backbone of the network. DNS. Home Assistant. Minecraft Bedrock. Dashboards. N8N automation. Ray experiments. Longhorn storage. All the containerised services that make my house feel like a tiny data centre.
The NUC is where Ollama lives. Larger LLM models sit on its 1TB SSD with 16GB of RAM behind them. When I’m running heavier AI workloads (embeddings, local assistants, or anything that would choke an ARM board), the NUC takes it. Kubernetes schedules intelligently, and I can control placement when needed. ARM for efficiency. x64 for muscle.
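Controlling that placement is simple because the kubelet labels every node with its CPU architecture. Pinning Ollama to the NUC is just a `nodeSelector` on the built-in `kubernetes.io/arch` label. A rough sketch (the deployment name and image tag are illustrative, not my exact manifest):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64   # only the x64 NUC matches this
      containers:
      - name: ollama
        image: ollama/ollama:latest
        ports:
        - containerPort: 11434      # Ollama's default API port
EOF
```

Leave the selector off and the scheduler is free to put a pod on any node, which is exactly what I want for the lighter ARM-friendly services.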
I like that split. It makes sense.
The Pis are incredibly efficient. Low power. Quiet. Always on. Spread workloads across four of them and they barely break a sweat. If one goes down, the cluster carries on. Pods reschedule. Services recover. There’s something satisfying about watching that happen automatically.
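You don’t have to wait for a node to die to see that failover. Draining a worker on purpose shows the same behaviour, assuming a node named `pi-worker-2` (your node names will differ):

```shell
# Evict the pods from one worker; they reschedule onto the others.
kubectl drain pi-worker-2 --ignore-daemonsets --delete-emptydir-data

# Watch the evicted pods come back up on different nodes.
kubectl get pods -A -o wide --watch

# Put the node back into the scheduling pool when you're done.
kubectl uncordon pi-worker-2
```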
Networking is clean too. Everything sits neatly in my 192.168.0.x range. MetalLB hands out service IPs so I can access cluster services just like normal devices on the network. Ollama is exposed cleanly. The Kubernetes dashboard has its own address. OpenWebUI has its own IP. N8N has its own IP. It feels structured rather than hacked together.
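The MetalLB side of that is just an address pool plus a Layer 2 advertisement. Roughly (the pool range is illustrative; I carve it out of my 192.168.0.x subnet, away from DHCP):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.200-192.168.0.220   # range reserved for cluster services
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - homelab-pool
EOF
```

After that, any Service of type `LoadBalancer` gets an IP from the pool and answers on the LAN like any other device.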
Storage was a major decision. Each Pi having its own SSD means I can run distributed storage properly without worrying about fragile flash media. Logs, databases, persistent volumes, model caching — all of it sits on real drives now. The NUC’s 1TB SSD gives me breathing room for large models and datasets without constantly trimming space.
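With Longhorn running across the Pi SSDs, a persistent volume is just a claim against its storage class. A minimal sketch (the claim name and size are illustrative):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: longhorn   # replicated across the Pi node SSDs
  resources:
    requests:
      storage: 5Gi
EOF
```

Longhorn replicates the volume across nodes, so the data survives a single Pi (or its SSD) dropping out.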
Power usage was also part of the thinking. Five Pi 5s plus a NUC is still far more efficient than running a stack of old desktops or random mismatched hardware. It’s compact. Predictable. Quiet. Yet it gives me a mixed-architecture cluster that can run almost anything I throw at it.
What I enjoy most is the control.
I built it. I wired it. I configured it. I’ve broken it more times than I can count and rebuilt it every time. I’ve wrestled with node affinity, storage issues, networking glitches, MetalLB quirks, and service restarts. And every time something failed, I learned something.
This isn’t just about running containers. It’s about understanding distributed systems from the inside. Seeing how scheduling works. Seeing how ARM and x64 coexist in the same cluster. Watching pods move between nodes. Knowing exactly which IP maps to which device in my house.
It feels less like a hobby experiment now and more like a deliberate home infrastructure platform. A hybrid ARM and Intel cluster sitting quietly behind the scenes, running automation, AI, services and experiments 24/7.
And the best part is it’s still flexible. Add another Pi with an AI HAT, perhaps. Reconfigure the roles. Spin up a new project at midnight just because I feel like it.
It’s my own small distributed universe. Not in a data centre. Not in the cloud. Right here on my network.
And honestly, there’s something deeply satisfying about that.
