Setting Up a Kubernetes Cluster on Raspberry Pi with K3s and MetalLB: A Practical Guide

Nikolay Penkov · February 27, 2025

Imagine having the power of a tech giant’s infrastructure right in your living room, quietly humming away on a device smaller than a sandwich. That’s not science fiction — it’s what happens when you marry the humble Raspberry Pi with Kubernetes, the technology that powers some of the world’s most sophisticated digital platforms.


Why Turn Your Raspberry Pi into a Kubernetes Powerhouse?

Remember when learning new technologies meant spending thousands on equipment or renting cloud services? Those days are over. With a $35 Raspberry Pi and some free software, you can build a learning environment that mirrors what Netflix, Spotify, and Google use to serve millions of users.

But why would anyone want to run Kubernetes on such a tiny device? Because sometimes, the best way to learn to sail isn’t on the open ocean — it’s in a pond. Your Raspberry Pi cluster becomes your technology sandbox, where mistakes cost nothing and experimentation is encouraged.

Kubernetes on a Diet

Traditional Kubernetes is like showing up to a bicycle ride in an 18-wheeler — it’s powerful but excessive for smaller environments. That’s where K3s shines.

K3s isn’t just Kubernetes with features removed — it’s Kubernetes reimagined for environments where every megabyte of RAM matters. The entire K3s binary weighs in at under 100MB, a fraction of the footprint of a standard Kubernetes installation. This is why your Pi — with its modest hardware specs — can run what normally requires serious server hardware.

Building Your Single-Node Cluster

Step 1: Making First Contact

Don’t let your Raspberry Pi sit there gathering dust. Install a headless version of Raspberry Pi OS and get started. The first step is establishing communication:

```shell
ssh <pi-user>@<pi-ip-address>
```

Step 2: Installing K3s on Your Pi

Installing enterprise software usually involves complex processes and multiple dependencies. K3s breaks that mold with elegance:

```shell
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
```

This single line does several important things:

  • Downloads and installs the K3s binary
  • Sets up K3s as a service
  • Starts K3s with your Raspberry Pi as both a server (control plane) and agent (worker)
  • Creates a kubeconfig file with permissions that allow your regular user to access it (mode 644)

The --write-kubeconfig-mode 644 flag is critical—it ensures that your kubeconfig file at /etc/rancher/k3s/k3s.yaml has the right permissions from the start, allowing you to interact with your cluster without constantly using sudo.

For remote access from other machines, you can copy this file to your workstation and modify the server address to point to your Pi’s IP address.
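One way to set this up is sketched below. The path `~/.kube/pi-config` is an arbitrary name chosen for illustration, and `<pi-user>` / `<pi-ip-address>` are placeholders you must fill in; the rewrite step is demonstrated on a stand-in file so you can see exactly what changes:

```shell
# Copy the kubeconfig from the Pi to your workstation (run on the workstation):
#   scp <pi-user>@<pi-ip-address>:/etc/rancher/k3s/k3s.yaml ~/.kube/pi-config
# The copied file points at 127.0.0.1, so rewrite the server address to the
# Pi's IP. Demonstrated here on a stand-in file; use your real path and IP:
printf 'server: https://127.0.0.1:6443\n' > /tmp/pi-config
sed -i 's|127.0.0.1|<pi-ip-address>|' /tmp/pi-config
cat /tmp/pi-config
```

After that, `kubectl --kubeconfig ~/.kube/pi-config get nodes` should reach the cluster, assuming port 6443 on the Pi is reachable from your workstation.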

Step 3: Confirming Your Installation

After bringing K3s to life, you’ll want confirmation that everything is working:

```shell
kubectl get nodes
```

When your Raspberry Pi’s name appears in the list, take a moment to reflect — you’ve just created a Kubernetes node on hardware that costs less than a dinner for two.
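If the node doesn’t appear, you can also inspect K3s directly on the Pi. This is a sketch assuming the default systemd-based install performed by the script above:

```shell
# K3s registers itself as a systemd service during installation:
sudo systemctl status k3s --no-pager
# The k3s binary also bundles kubectl as a subcommand, which works
# regardless of kubeconfig file permissions:
sudo k3s kubectl get nodes
```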

Step 4: Enabling Resource Management

Open the kernel boot parameters file and append cgroup_memory=1 cgroup_enable=memory to the end of the existing parameter line:

```shell
sudo nano /boot/cmdline.txt
```

By adding cgroup_memory=1 cgroup_enable=memory to this file, you're enabling sophisticated resource management capabilities. After a reboot, your Pi will be able to allocate memory to containers with precision—essential for running multiple applications smoothly in constrained environments.
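For reference, cmdline.txt must remain a single line, with the new parameters appended after the existing ones. A hypothetical result (your existing parameters will differ) might look like:

```
console=serial0,115200 console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 fsck.repair=yes rootwait cgroup_memory=1 cgroup_enable=memory
```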

```shell
sudo reboot
```

Step 5: Adding Load Balancing with MetalLB

In cloud environments, load balancers distribute traffic automatically. MetalLB brings this capability to your Raspberry Pi:

```shell
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml
```

MetalLB will intelligently direct traffic to your applications by exposing your services to the IPs of your local network (instead of the cluster internal network), just like in professional environments. It’s like having a tiny, efficient traffic controller working inside your network.

Step 6: Configuring MetalLB for Your Network

Now we need to tell MetalLB which IP addresses it can use. First, you’ll need to identify your network’s IP range. Most home networks use either 192.168.0.x, 192.168.1.x, or another variation in the 192.168.x.x range.

To discover your network range, you can use:

```shell
ip -4 addr show | grep inet
```

Look for the entry that matches your main network interface (often eth0 or wlan0) and note the network range.

Now, create a configuration file for MetalLB. Since v0.13, MetalLB is configured through custom resources — an IPAddressPool listing the addresses it may hand out, plus an L2Advertisement announcing them — rather than the legacy ConfigMap format:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
```

Important: Replace the IP range with addresses in your own network’s range that aren’t already in use. For example, if your network uses 192.168.1.x addresses, reserve a small range (like .240 to .250) that won’t conflict with your router’s DHCP assignments or other devices. Check your router’s settings to confirm which IPs are safe to use.

Save this configuration to a file named metallb-config.yaml and apply it:

```shell
kubectl apply -f metallb-config.yaml
```

Step 7: Deploying Your First Application

Now for the moment of truth — deploying an actual application:

```shell
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
```

With these commands, you’ve instructed your Raspberry Pi to download and run the NGINX web server, then make it accessible from other devices on your network. Finally, run kubectl get services and check that the nginx service has been assigned an IP from your local network. You can even navigate to that IP in your browser, or fetch it with curl, as proof that your tiny computer is now performing real orchestration tasks.
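A verification session might look like the following sketch. The external IP used here is hypothetical — MetalLB will assign one from the pool you configured, so substitute whatever appears in the EXTERNAL-IP column:

```shell
# List services and note the EXTERNAL-IP assigned to nginx:
kubectl get services
# Fetch the page from any machine on your LAN (substitute the assigned IP):
curl http://192.168.1.240
```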

Practical Applications

Your Raspberry Pi cluster isn’t just a toy — it’s a fully functional Kubernetes environment capable of running real applications. Consider these possibilities:

  • Home Automation Hub: Deploy Home Assistant in a container, managed by Kubernetes for reliability
  • Personal Media Server: Run Plex or Jellyfin with automatic failover if something crashes
  • Learning Laboratory: Experiment with microservices architectures without cloud costs
  • Continuous Integration System: Set up Jenkins to automatically test your code projects
  • Edge Computing Prototype: Process IoT data locally before sending summaries to the cloud

Each of these projects becomes not just a useful tool but a learning opportunity that builds skills valued in modern technology careers.

Looking Ahead: Expanding Your Kubernetes Infrastructure

This guide is just the beginning of your Kubernetes journey. In my upcoming videos and posts, we’ll explore how to:

  1. Add More Nodes: Transform your single node into a true cluster by adding additional Raspberry Pis as worker nodes, learning about Kubernetes master-worker architecture along the way.
  2. Implement Monitoring: Set up Prometheus and Grafana to gain visibility into your cluster’s performance, resource usage, and health — an essential skill for any production environment.
  3. Enhance Security: Harden your cluster with proper network policies, Role-Based Access Control (RBAC), and secrets management to protect your applications and data.
  4. Deploy Real-World Applications: Move beyond test deployments by installing and configuring applications that solve actual problems in your home or small office environment.

Conclusion

There’s something deeply satisfying about building complex systems on minimal hardware. While companies spend millions on data centers, you’ve created something functionally similar for the cost of a few coffee shop visits.
