Building My Hybrid Kubernetes Cluster
Over the
last couple of weeks, I’ve been steadily building up my own Kubernetes (k3s)
cluster at home. It’s a mix of new Raspberry Pi 5 boards, some older pcDuino
Nano boards, and a laptop I’ve turned into a dedicated compute node.
It hasn’t
been smooth sailing, but every hurdle has been a learning curve. Here’s the
story so far.
Current Cluster Setup
Here’s
what the cluster looks like right now:
- MASTER: Raspberry Pi 5 (8 GB RAM, 500 GB SSD)
- NODE01: Raspberry Pi 5 (8 GB RAM, 500 GB SSD)
- NODE02 (coming soon): Raspberry Pi 5 (8 GB RAM, SSD to be moved over from a pcDuino)
- NODE03: pcDuino Nano (1 GB RAM, 500 GB SSD)
- NODE04: pcDuino Nano (1 GB RAM, 500 GB SSD)
- AIHub: Intel i7-4xxx laptop (16 GB RAM, 1 TB SSD, Ubuntu; acting as another node alongside the Pis)
So, it’s
a hybrid cluster: ARM64 Pis, some old 32-bit ARM pcDuinos, and an x86_64 Intel
machine all working together under one Kubernetes umbrella.
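A quick way to sanity-check the mix (assuming kubectl is already pointed at the cluster) is to list the nodes alongside the architecture label that each kubelet sets automatically:

```bash
# List all nodes, then show the CPU architecture each kubelet reports.
# kubernetes.io/arch is a built-in label, so no extra setup is needed.
kubectl get nodes -o wide
kubectl get nodes -L kubernetes.io/arch
```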
(Photos: the i7-4xxx 8-core laptop; the pcDuino Nanos with their SSDs; a Raspberry Pi 5 with SSD; and the heatsinks in my Pi 5, in a case with an actively cooled fan.)
Week One: Laying the Foundations
Setting up the Master (Raspberry Pi 5)
The
journey started with a fresh Pi 5 running Debian Bookworm. I hooked up a 500 GB
SSD and installed k3s to make it the MASTER node. This node is the brain
of the cluster. It manages scheduling, workloads, and coordination.
The
install went mostly to plan, though I did need a couple of kernel updates to make
sure the SSD was happy on USB.
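For reference, the server side is essentially the standard k3s one-liner. Here's a rough sketch of what the MASTER setup boils down to (your options may differ):

```bash
# Install k3s in server (control-plane) mode; the install script detects ARM64 automatically.
curl -sfL https://get.k3s.io | sh -

# Check the node came up, and note the join token the workers will need.
sudo k3s kubectl get nodes
sudo cat /var/lib/rancher/k3s/server/node-token
```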
Adding NODE01 (Second Raspberry Pi 5)
Next came
NODE01, another Pi 5. I flashed the OS, updated it, and joined it to the
MASTER. At first, it didn’t show up as a proper worker.
Turned
out I hadn’t fully applied the join token, and Kubernetes got confused about
its role. After fixing that in the config, it snapped into line. That gave me a
solid ARM64 foundation: MASTER + NODE01.
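Joining a worker is the same script run in agent mode, pointed at the MASTER with that token. The address and token below are placeholders:

```bash
# On NODE01: join the cluster as an agent (worker node).
# Replace the URL and token with your MASTER's address and the token read above.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<master-ip>:6443 \
  K3S_TOKEN=<token-from-master> \
  sh -
```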
Week Two: The Challenges Begin
Wrestling with the pcDuinos
This was
where things got messy. I wanted to include my two old pcDuino Nano boards
(NODE03 and NODE04). They only have 1 GB of RAM each and run on ARMv7 (32-bit),
so they were never going to be heavy lifters.
The
problems:
- They crashed under anything demanding.
- The k3s install needed the ARM32 build.
- Networking was patchy until the configs were fixed; part of the problem was my own failure to identify the boards correctly at first.
The
fixes:
- Marked them with node taints and labels so Kubernetes won’t schedule heavy jobs on them.
- Hooked them up to SSDs for persistent storage.
- Limited them to lightweight tasks only.
After a
lot of fiddling, they stabilised. They’re not fast, but they add some extra
capacity and storage.
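For the curious, the taints and labels come down to a couple of kubectl commands. The node names and the label key here are illustrative rather than my exact values:

```bash
# Mark the pcDuinos as light-duty nodes (label key/value are just examples).
kubectl label nodes node03 node04 workload-class=light

# Taint them so pods are only scheduled there if they explicitly tolerate it.
kubectl taint nodes node03 node04 workload-class=light:NoSchedule
```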
Bringing AIHub Into the Mix
My Ubuntu
laptop (Intel i7, 16 GB RAM) became AIHub, another node in the cluster. I wasn't using this ten-year-old laptop much anymore; I had already installed Ubuntu 24.04 LTS on it, and since a few keys on the keyboard are non-functional (I don't think I'll replace it), I decided to keep it running and join it to the cluster as an additional worker node.
The big
headache here was mixing ARM (Pi 5s, pcDuinos) with x86_64 (AIHub).
Not all containers run on every architecture, so workloads had to be carefully
directed.
Solution:
I used Kubernetes labels and selectors to pin jobs where they belong. Heavy AI
workloads go to AIHub, while the Pis handle orchestration and light jobs.
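In practice that mostly means a nodeSelector on the built-in architecture label. Here's a minimal sketch of pinning an amd64-only workload to AIHub; the Deployment name and image are placeholders, not my actual workload:

```bash
# Pin an x86_64-only workload to the amd64 node via the built-in kubernetes.io/arch label.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heavy-ai-job               # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: heavy-ai-job
  template:
    metadata:
      labels:
        app: heavy-ai-job
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64  # only schedules on the x86_64 laptop
      containers:
      - name: worker
        image: example/amd64-only-image   # placeholder image
EOF
```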
Persistent Storage
Once the
nodes were up, I needed to make sure storage persisted across reboots. Each
node had its own SSD, so I configured persistent volumes (PVs) in
Kubernetes.
The main
issue was paths: at one point, pods couldn’t see the SSDs because the YAML
definitions were pointing to the wrong directories. Fixing the hostPath and volumeMounts sorted it out.
I also
renamed the PVs after their nodes (e.g., master-pv, node01-pv) to keep things neat.
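As a rough sketch, each PV is a hostPath volume tied to the SSD's mount point on its node, with a matching claim. The path, size, and storage class below are assumptions; the real lesson was that hostPath.path has to match the actual mount point exactly:

```bash
# Example hostPath PV + PVC for NODE01's SSD (names, path and size are illustrative).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: node01-pv
spec:
  storageClassName: manual
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/ssd/k3s-data        # must match where the SSD is actually mounted
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: node01-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  volumeName: node01-pv            # bind explicitly to the PV above
EOF
```

Since hostPath doesn't pin anything to a node by itself, pods that use these claims also need to be scheduled onto the matching node (via the same labels and selectors used elsewhere in the cluster).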
Where Things Stand Now
By the
end of the second week, the cluster finally felt solid:
- MASTER + NODE01: Reliable ARM64 backbone
- NODE03 + NODE04: Legacy support nodes with
SSDs
- AIHub: The heavy lifter for
compute and AI models
And soon NODE02, another Pi 5 with an SSD (which I’ll take from one of the pcDuino nodes), will slot in. That’ll give me three strong Pi 5 workers alongside AIHub and the two light-workload pcDuinos.
Pitfalls & Lessons Learned
- Old hardware can be integrated, but only carefully. The pcDuinos proved it’s possible, but you need to restrict them to light roles.
- Mixed architectures need planning. Some workloads simply won’t run on ARM32 or ARM64, so labels and selectors are essential.
- Persistent storage is fiddly. If paths aren’t exact, pods won’t see volumes. Standardising names helped avoid confusion.
- Joining nodes isn’t always straightforward. Even when following the guide, tokens and configs need double-checking.
What’s Next
- Add NODE02 (Pi 5, 8 GB) and migrate an SSD over from a pcDuino node.
- Run benchmarks to measure how each node contributes.
- Experiment with distributed workloads like Einstein@Home.
- Continue testing LLM serving between AIHub and the Pis.
- Move to static IP addressing across the cluster for stability.
- Add VESA mounts to the Pi cases so they sit more neatly in an enclosure or frame with their SSDs.
Closing Thoughts
Two weeks
ago, this project was just a single Raspberry Pi. Now it’s a hybrid
Kubernetes cluster with five nodes online and another coming soon.
It’s been
equal parts frustrating and rewarding, but that’s the fun of home-lab tinkering.
You learn something new at every step. This mix of old and new hardware proves
that Kubernetes really can bring together almost anything, as long as you’re
willing to get your hands dirty.







