Introduction

My first step into Kubernetes was attending a Kubernetes 101 workshop at the Southern California Linux Expo (SCaLE 18x). As a network engineer, I admit that my goal at the time was only to gain a high-level understanding of the concepts so I could support it from a networking perspective. Fast-forward a few months, and I'm building my own Kubernetes cluster on Raspberry Pis as a platform to host a few home projects.

In this blog post, I'll document my initial cluster build-out, discuss my personal hardware choices, and use a few simple Ansible playbooks to set up the cluster nodes along the way. This post is more of a build log than a build guide. If you're looking for a guide to set up your own cluster, you'll find this post helpful, but it's not meant to be followed step by step.

Why Kubernetes?

Kubernetes takes many of the infrastructure components commonly used to support highly available modern applications and packages them together behind a common interface. What do I mean by that? Well, for redundancy and performance, production web applications have generally been hosted on a pool of servers. Since there are multiple servers in a pool, a dedicated load-balancer appliance is needed to forward client requests to the different servers in the pool. There's also often a need for a reverse-proxy appliance to handle SSL encryption and URL rewrites. Each server also needs to be able to communicate with other resources, like a database, which must also be highly available. If changes need to be made, they often have to be repeated on each server and can't always be easily reversed. If additional servers need to be added to scale performance or capacity, they also have to be added to the pool on the load balancer. If a server goes down or becomes problematic, it's not always easy to replace it quickly. Some of these management tasks can be automated, but it's still a lot to manage. Kubernetes is a container orchestration platform that bundles many of these components together, letting you manage applications as deployments (not servers) and create load balancers and reverse proxies, while still being modular enough to support bare-metal and cloud-provider integrations.

Using it to host applications at home, I get all the benefits of the built-in redundancy, load balancing, scalability, and self-healing features without needing to build and manage separate nodes and attempt to build the automation myself. I can deploy all of this on a few Raspberry Pis and manage everything by defining objects instead of individually managing containers or VMs. All the while, I'm learning an open-source platform that's now widely used not only in the cloud, but in data centers and even in embedded systems to support microservices architectures. If you're interested, you can check out all the features at Kubernetes.io.

Cluster Architecture

The first thing to understand is that Kubernetes is deployed as a cluster of one or more computers called nodes. In my case, I'm using Raspberry Pis as nodes, but in cloud environments they're typically VMs. There are two types of nodes: worker nodes and master nodes.

Worker nodes are the nodes that actually host our containers. They run a process called kube-proxy, which controls networking, and a process called kubelet, which controls the node and its containers through various API integrations. Master nodes run the control plane components and are responsible for managing the worker nodes and tracking their state. I should mention that while master nodes can also act as worker nodes, hosting containers and running the control plane components at the same time, this isn't the default in most Kubernetes distributions.

API Objects

Kubernetes is a container orchestration platform which operates on the desired state principle. When we interact with a Kubernetes cluster, we do so using the Kube-API component of the Kubernetes control plane. We define API objects representing our desired state and the Kubernetes control plane continuously tries to make the current state match the desired state. I'll be introducing the API objects as we go, but there are a few basic API objects we should define before we move on:

Pods are objects which are intended to represent a single instance of an application. They are the smallest unit of execution that can be created or deployed in Kubernetes. You can think of a Pod as defining an environment for running a container, which also includes the container itself. Although Pods can contain more than one container, it's not common. Pods are also considered ephemeral instances of an application; they can be created, replicated, and deleted as necessary. For example, if a Pod crashes, Kubernetes can automatically stop the crashed Pod and start a replica Pod to replace it. You can even instruct Kubernetes to deploy a new version of your app this way by incrementally replacing old Pods until they are all replaced with new Pods.
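
For reference, here's a minimal sketch of what a Pod manifest looks like; the name and image are placeholders I've chosen just for illustration, not something from my cluster:

# pod-example.yaml (illustrative sketch)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx            # the single container running inside this Pod
      image: nginx:1.19      # container image to run
      ports:
        - containerPort: 80  # port the container listens on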

Services are objects that define how Pods communicate with each other or the outside world. In other words, they allow us to define networking to and from Pods. For example, if our web front-end Pods needed to talk to our back-end Pods, we'd define a Service object that dynamically "selects" all of our back-end Pods and creates a virtual IP address, giving our front-end Pods a stable virtual IP that can be used to reach any one of our back-end Pods. I'll be delving into Services in more depth later in this series.
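
Continuing that front-end/back-end example, a Service selecting the back-end Pods by label might look like the sketch below; the label and port numbers are assumptions for illustration:

# service-example.yaml (illustrative sketch)
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend        # selects all Pods labeled app=backend
  ports:
    - protocol: TCP
      port: 80          # port exposed on the Service's stable virtual IP
      targetPort: 8080  # port the back-end containers actually listen on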

Volumes are objects associated with Pods that define storage mounted within the containers in a Pod. Normally, files created within a container are only available to that container and only exist while the container is running. If the container is restarted, the files and any changes to them are lost. Volumes defined with a Pod provide a way to mount files into a container in a more persistent way. There are several different types of volumes that Pods can use, with varying levels of persistence. For example, an emptyDir volume exists for the lifetime of the Pod and survives a container restart, but not a Pod restart. On the other hand, while a persistentVolumeClaim type volume also lives for the lifetime of the Pod, the actual data exists in a persistent volume resource outside of the Pod. I'll be talking about this in more depth later as well.
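
As a quick illustration, here's a sketch of a Pod that mounts an emptyDir volume into its container; the names, image, and mount path are arbitrary choices on my part:

# emptydir-example.yaml (illustrative sketch)
apiVersion: v1
kind: Pod
metadata:
  name: cache-pod
spec:
  containers:
    - name: app
      image: nginx:1.19
      volumeMounts:
        - name: cache-volume
          mountPath: /cache    # where the volume appears inside the container
  volumes:
    - name: cache-volume
      emptyDir: {}             # lives as long as the Pod; survives container restarts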

Deployments are objects that define a deployment of Pods. They can be used for new Pod roll-outs, incrementally replacing existing Pods, rolling back to an earlier deployment, scaling deployments up, and pausing deployments. With a Deployment, you specify how many replicas of a Pod to run (managed through a ReplicaSet), and Kubernetes will create or remove Pods to scale the deployment accordingly.
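
To tie these objects together, here's a sketch of a Deployment that asks for three replicas of an nginx Pod; scaling it is just a matter of changing the replicas field (or using kubectl scale, as I do near the end of this post). Again, the names and image are placeholders:

# deployment-example.yaml (illustrative sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3              # desired number of Pod replicas, enforced by a ReplicaSet
  selector:
    matchLabels:
      app: nginx           # the Deployment manages Pods with this label
  template:                # Pod template used to stamp out the replicas
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19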

Choosing Hardware

Next, let's talk hardware.

I chose to start my cluster with four Raspberry Pis. I already had two existing Pi 3s, but felt that 1GB of RAM would be a limiting factor for what I had planned. I decided to add two additional Pi 4s with 4GB of RAM each. Hopefully, I'll find an excuse to upgrade to the recently announced 8GB Pi 4 because, well, I JUST NEED THEM, OKAY?!

Since the Raspberry Pi's performance is so greatly impacted by the SD card, I also decided to purchase some respectable cards. Priced at around $8 for a 32GB card, there's really no reason not to. The EVO Select cards I've linked above are the same as the EVO+ cards; they're just Amazon-exclusive versions. Check out this benchmark article, which rates them highly. I also planned on using my existing Synology DS216j NAS as persistent storage for the cluster containers.

For power, I ultimately decided on using dedicated power supplies for each of the Pis and went with the official Raspberry Pi branded ones. There are a few reasons I came to this decision. At first, I was hoping to build a cleaner setup that didn't require so many plugs. I had two options: either purchase a USB charging hub and short cables, or use the optional PoE HATs with a PoE switch. While the newer Raspberry Pi 4 Model Bs have higher stated power requirements than previous models, they can still be run with a minimum of 5V at 2.5A as long as you aren't also powering other peripherals. That being said, I still had a very difficult time finding a reputable USB charging hub that could simultaneously provide enough power to each Pi with all of them plugged in. Many of these hubs only provide the required amperage when only some of the ports are in use. I also anticipated that it might be challenging to find cables that would work because of the known USB-C cable compatibility problem in the v1.0 revision of the Pi 4. The other option was to power the Pis with the optional PoE HAT (which also provides the Pi 4 minimum of 5V at 2.5A). However, since the PoE HAT only works on the Pi 3 Model B+ and newer, I would have had to upgrade my older Pi 3s as well. I would have also had to purchase a small 802.3af PoE switch with enough capacity to power all of the Pis. This may be something I reconsider in the future, but at the time, it didn't seem like money well spent since this isn't something I planned on moving once installed. Using the official power supplies, I could rest easy that all of the Pis, including the Pi 4s, would be sufficiently powered.

As far as the case goes, I wanted a simple stacked cluster case that displayed the Pis nicely and didn't cause any thermal issues. As a consequence of their increased power and performance, the Raspberry Pi 4s also generate more heat and are known to throttle the CPU under load. While I considered a case with fans, I was hoping for a silent, air-cooled cluster without cheap fans that might eventually start buzzing. Since a recent firmware upgrade was made available to help reduce the heat and power consumption of the Pi 4, I felt confident that an open-air case without active cooling would be sufficient. It hasn't been an issue yet, but I may do some testing in another post soon.

For networking, the MikroTik Routerboard was something I already had on my network. Almost any 5-port switch would have worked just fine, but I liked the idea of having a managed switch for monitoring purposes and the flexibility to configure VLANs and routing for the cluster if I wanted to. With some Google-colored ethernet patch cables added, it all fit nicely under the Pis for a clean-looking setup.

Preparing the Nodes

After assembling the cluster case and installing the Raspberry Pis, the next step was to install an operating system and prepare the nodes for running Kubernetes. I found no reason not to use the Debian-based Raspberry Pi OS Lite, so that's what I did. Since I primarily use Debian, I thought I'd try out the Ubuntu version of the Raspberry Pi Imager available on the Raspberry Pi downloads page. Although installing the Imager .deb package required manually installing some missing dependencies, the Imager application itself was extremely simple to use. I just selected the OS and the SD card and proceeded with writing each of the four SD cards I'd be installing in the Pis. I should note that at the time of writing, a 64-bit version of Raspberry Pi OS was in beta. I chose to stick with the 32-bit stable release because I prefer stability over the bleeding edge, and because I was only using Raspberry Pis with up to 4GB of RAM.

Image: Raspberry Pi Imager screenshot

Since Raspberry Pis are often used headless, Raspberry Pi OS provides an easy way to enable the SSH server without needing to connect a display. As shown in the steps found here, you simply create a file titled "ssh" in the boot partition on the SD card. I did this after writing each of the SD cards, before ejecting them.

Next, I installed the SD cards in the Pis and powered on only one. If I had powered on all the Pis at this point, they would all have had the same hostname, and with similar MAC addresses it would have been difficult to tell them apart when connecting via SSH. Instead, I powered them on one at a time, connected via SSH, changed the hostname using raspi-config, and rebooted each one before powering on the next.

On my network this was a fairly simple process because I have a pfSense appliance as my router, and I have the DNS server configured to register hostnames from DHCP leases and static mappings. This allowed me to just ssh to each new Pi using "pi@raspberrypi" without knowing the IP address it was assigned from DHCP. If I didn't have this configured on my network, I would have had to do an nmap scan or look at DHCP leases to find the IP address of each new Raspberry Pi.

This is also why I only changed the hostnames and didn't configure any static IP addresses on the Raspberry Pis. It's something I generally do for all the servers on my home network: instead of statically configuring an IP address on the server itself, I rely on static DHCP mappings and hostname registration in DNS. Using DHCP and DNS instead of configuring static IPs everywhere makes it much simpler to change a server's IP address if I ever need to, and means I rarely have to remember IP addresses. I configured static DHCP mappings to reserve a specific IP address for each of the Pis. This wasn't strictly necessary, but it gives me finer control of the lease times and other options given to each Pi, and makes the IP addresses predictable.

Here are the simple host names I chose:

  • k3s-master-rpi001
  • k3s-worker-rpi002
  • k3s-worker-rpi003
  • k3s-worker-rpi004

The "k3s" prefix is the version of Kubernetes I plan on using. I'll talk more about this in the next post. The "master" or "worker" designation refers to the role of the node. Since K3s only supports multiple master nodes when using an external database, I'm using only a single master node in my cluster, (which is quite common). Lastly, the "rpi" suffix identifies that the node is a Raspberry Pi and I follow it with a simple numeric identifier that has room for up to 999 Raspberry Pis--hey! one can dream right?

Once I had all the Pis up and running, I logged in to each of them, changed the default pi user's password, and expanded the file system, once again using the raspi-config utility. At this stage, many guides out there will have you reduce the amount of RAM allocated to the GPU, disable swap memory, and add some cgroup options to support containers. However, I put off these tasks for now because I wanted to build out the cluster incrementally and see what impact each change would have individually.

Image: raspi-config screenshot

Speaking of building incrementally, at this point I thought it would be a good idea to set up Ansible so I could incrementally apply the changes I would need to make across all of the nodes. From a VM I dedicated to Ansible, I used ssh-copy-id to copy my user's SSH public key to the pi user on each Pi. This allows Ansible to connect to each node as the pi user without needing a password. I then created a simple inventory file and made sure Ansible could log into all the nodes, using an ad-hoc command to record some device details.

brian@ansible-vm:~/k3s_ansible$ cat hosts
[k3s_rpis]
k3s-master-rpi001
k3s-worker-rpi[002:004]

brian@ansible-vm:~/k3s_ansible$ ansible -u pi -m shell -a \
> 'uname -a && \
> lsb_release -d && \
> cat /proc/cpuinfo | grep "Revision\|Model" &&\
> cat /proc/meminfo | grep MemTotal && \
> lscpu | grep "Architecture\|Model name\|CPU max MHz"' \
> k3s_rpis
k3s-master-rpi001 | CHANGED | rc=0 >>
Linux k3s-master-rpi001 4.19.118-v7l+ #1311 SMP Mon Apr 27 14:26:42 BST 2020 armv7l GNU/Linux
Description:    Raspbian GNU/Linux 10 (buster)
Revision    : c03112
Model       : Raspberry Pi 4 Model B Rev 1.2
MemTotal:        3999744 kB
Architecture:        armv7l
Model name:          Cortex-A72
CPU max MHz:         1500.0000

k3s-worker-rpi002 | CHANGED | rc=0 >>
Linux k3s-worker-rpi002 4.19.118-v7l+ #1311 SMP Mon Apr 27 14:26:42 BST 2020 armv7l GNU/Linux
Description:    Raspbian GNU/Linux 10 (buster)
Revision    : c03112
Model       : Raspberry Pi 4 Model B Rev 1.2
MemTotal:        3999744 kB
Architecture:        armv7l
Model name:          Cortex-A72
CPU max MHz:         1500.0000

k3s-worker-rpi003 | CHANGED | rc=0 >>
Linux k3s-worker-rpi003 4.19.118-v7+ #1311 SMP Mon Apr 27 14:21:24 BST 2020 armv7l GNU/Linux
Description:    Raspbian GNU/Linux 10 (buster)
Revision    : a02082
Model       : Raspberry Pi 3 Model B Rev 1.2
MemTotal:         948280 kB
Architecture:        armv7l
Model name:          Cortex-A53
CPU max MHz:         1200.0000

k3s-worker-rpi004 | CHANGED | rc=0 >>
Linux k3s-worker-rpi004 4.19.118-v7+ #1311 SMP Mon Apr 27 14:21:24 BST 2020 armv7l GNU/Linux
Description:    Raspbian GNU/Linux 10 (buster)
Revision    : a22082
Model       : Raspberry Pi 3 Model B Rev 1.2
MemTotal:         948280 kB
Architecture:        armv7l
Model name:          Cortex-A53
CPU max MHz:         1200.0000

Since the installation of Raspberry Pi OS Lite is done by directly imaging the SD cards, you don't go through an installer to set localization options like the timezone or NTP servers. I noticed that the timezone was set to Europe/London on all of the Pis. While I didn't expect it to really be an issue, I wanted to ensure that logs would have accurate timestamps. To be certain that the system time and timezone were configured identically across all the nodes, I created and ran the Ansible playbook shown below. I had to set the NTP server manually in the timesyncd configuration file because I found that the NTP server received via DHCP on my network was frustratingly being ignored by timesyncd. I found references to this being a bug in past versions, but it should have been resolved in systemd version 241 running on the Pis. I may revisit this later, but for the time being, I just manually set this in the timesyncd configuration file.

# timesyncd.yaml
---
- hosts: k3s_rpis
  remote_user: pi
  become: yes
  tasks:
    - name: Set NTP configuration in /etc/systemd/timesyncd.conf
      lineinfile:
        path:    /etc/systemd/timesyncd.conf
        line:    'NTP=192.168.2.1'
        create:   yes
        state:    present
      register: timesyncd_conf

    - name: Start systemd-timesyncd, if not started
      service:
        name: systemd-timesyncd
        state: started
      register: timesyncd_started

    - name: Restart systemd-timesyncd, if running and timesyncd.conf changed
      service:
        name: systemd-timesyncd
        state: restarted
      when: not timesyncd_started.changed and timesyncd_conf.changed

    - name: Enable systemd-timesyncd, if not already enabled
      service:
        name: systemd-timesyncd
        enabled: yes

    - name: Set timezone to America/Los_Angeles
      timezone:
        name: America/Los_Angeles
      register: timezone

    - name: Restart cron if timezone was changed
      service:
        name: cron
        enabled: yes
        state: restarted
      when: timezone.changed

Next, I created and ran another simple playbook to update the apt package list and upgrade the installed packages.

# apt-upgrade.yaml
---
- hosts: k3s_rpis
  remote_user: pi
  become: yes
  tasks:
    - name: Update apt package lists and upgrade
      apt:
        update_cache: yes
        upgrade: safe

Finally, I rebooted all the nodes, again using an ad-hoc Ansible command, this time with the reboot module.

brian@ansible-vm:~/k3s_ansible$ ansible -u pi -m reboot -b k3s_rpis
k3s-worker-rpi003 | CHANGED => {
    "changed": true,
    "elapsed": 27,
    "rebooted": true
}
k3s-worker-rpi004 | CHANGED => {
    "changed": true,
    "elapsed": 30,
    "rebooted": true
}
k3s-worker-rpi002 | CHANGED => {
    "changed": true,
    "elapsed": 33,
    "rebooted": true
}
k3s-master-rpi001 | CHANGED => {
    "changed": true,
    "elapsed": 36,
    "rebooted": true
}

Installing Kubernetes

With the basic tasks to prepare the nodes completed, I was finally ready to install Kubernetes and get the cluster up and running. As I briefly mentioned earlier, I chose to install a lightweight Kubernetes distribution developed by Rancher called K3s. I'll be going over the basics, but you can read more about K3s in the official docs. The main reason I chose K3s, like so many others do for Raspberry Pi clusters, is that it's a lightweight distribution of Kubernetes that's supported on ARM CPUs and ships as a single binary, which makes it simpler to manage. Rancher also provides installation and uninstall scripts that make it extremely easy to get started.

The first step was to run the install script provided in the K3s quick start guide on the master node; in K3s terminology, this is the K3s server node.

pi@k3s-master-rpi001:~ $ curl -sfL https://get.k3s.io | sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.18.3+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.3+k3s1/sha256sum-arm.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.3+k3s1/k3s-armhf
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
pi@k3s-master-rpi001:~ $ 

From the output, we can see that the installation script did a few things in addition to downloading and installing the k3s binary. First, it created symlinks for kubectl, crictl, and ctr pointing to the k3s binary. This is worth noting because when you use these commands locally, they're actually part of the k3s binary and not standalone programs. Next, some utility scripts were created, including the killall and uninstall scripts. Finally, the script created a systemd unit file, enabled it so K3s runs at boot, and then started the service.

K3s also comes with a check-config utility which I ran next to check the installation. It provides quite a lot of output so I've grepped it.

pi@k3s-master-rpi001:~ $ sudo k3s check-config | grep "swap\|System\|Necessary\|Optional\|Network\|missing\|fail"
System:
- /usr/sbin iptables v1.8.2 (nf_tables): should be older than v1.8.0 or in legacy mode (fail)
- swap: should be disabled
Generally Necessary:
Optional Features:
- CONFIG_CGROUP_PERF: missing
- CONFIG_CGROUP_HUGETLB: missing
- CONFIG_CGROUP_NET_PRIO: missing
- CONFIG_CFS_BANDWIDTH: missing
- CONFIG_RT_GROUP_SCHED: missing
- Network Drivers:
      Optional (for encrypted networks):
STATUS: 1 (fail)

In the output, we see that the iptables check failed. This is because Raspbian Buster (now called Raspberry Pi OS) defaults to nftables, but K3s requires iptables to be in legacy mode (or older than v1.8.0). This is actually noted in the K3s installation requirements, along with the steps to enable legacy iptables. We also see that swap should be disabled and that some optional cgroup kernel features and scheduler options are missing.

Before moving on to adding the worker nodes, I created and ran this Ansible playbook to enable legacy iptables on all the nodes.

# enable-legacy-iptables.yaml
---
- hosts: k3s_rpis
  remote_user: pi
  become: yes
  tasks:
    - name: Flush all iptables tables (iptables -F)
      iptables:
        flush: yes
      register: flushed

    - name: Set alternative for iptables as iptables-legacy
      alternatives:
        name: iptables
        path: /usr/sbin/iptables-legacy
      when: flushed.changed

    - name: Set alternative for ip6tables as ip6tables-legacy
      alternatives:
        name: ip6tables
        path: /usr/sbin/ip6tables-legacy
      when: flushed.changed

I then rebooted all the nodes using the same ad-hoc Ansible command I used earlier and ran the k3s check-config command again to confirm that I no longer saw the "fail" status. The check also indicated that swap "should be disabled." This didn't cause the check to fail, but I wanted to better understand why it should be disabled. After doing some research, I learned it's because the current QoS policy in Kubernetes assumes that swap is disabled. This is stated in the Kubernetes design documents and proposals as follows:

...If swap is enabled, then resource guarantees (for pods that specify resource requirements) will not hold. For example, suppose 2 guaranteed pods have reached their memory limit. They can continue allocating memory by utilizing disk space. Eventually, if there isn’t enough swap space, processes in the pods might get killed. The node must take into account swap space explicitly for providing deterministic isolation behavior.

In short, this means that Kubernetes has no ability to control swap memory independently of physical memory, which can cause performance problems and odd behavior. While I didn't expect the small 100MiB swap file that ships with Raspberry Pi OS to be used much, rather than wait for some strange memory problem to occur, I thought it best to follow the recommendation and disable swap at this point. I did this on all the nodes with the following Ansible playbook:

# disable-dphys-swapfile.yaml
---
- hosts: k3s_rpis
  remote_user: pi
  become: yes
  tasks:
    - name: Turn off swap now
      shell: dphys-swapfile swapoff

    - name: Remove the swapfile
      shell: dphys-swapfile uninstall

    - name: Run "free" command to confirm swap is off
      shell: free -m
      register: free

    - name: Output results of "free" command
      debug: var=free.stdout_lines

    - name: Stop and disable dphys-swapfile.service
      service:
        name: dphys-swapfile.service
        state: stopped
        enabled: no

After confirming that swap remained disabled after a reboot, it was time to address the last items shown as missing in the k3s check-config output. Ultimately, since the missing kernel features were optional at this point, I decided not to make any further changes unless I found they were necessary later. Adding the missing kernel features would have also meant recompiling the kernel from source, which is not a trivial change.

With the check-config requirements sorted out, I again followed the K3s quick start guide and ran the install script with the K3S_URL and K3S_TOKEN environment variables set to install K3s on the worker nodes and join them to the cluster. To automate this across all the worker nodes, I used an Ansible playbook. The playbook below first fetches the K3S_TOKEN value from the master node, where it's stored at /var/lib/rancher/k3s/server/node-token, and then runs the install script on the worker nodes using that token value.

# install-k3s-workers.yaml
---
- hosts: k3s_rpis
  remote_user: pi
  become: yes
  tasks:
    - name: Fetch the master node token
      fetch:
        src: /var/lib/rancher/k3s/server/node-token
        dest: "{{ inventory_hostname_short }}_node-token"
        flat: yes
      when: "'k3s_master' in group_names"

    - name: Download K3s install script on worker nodes
      get_url:
        url: https://get.k3s.io
        dest: ~/getk3s.sh
      when: "'k3s_workers' in group_names"

    - name: Run K3s install script on worker nodes
      shell: K3S_URL=https://k3s-master-rpi001:6443 K3S_TOKEN={{ lookup('file', 'k3s-master-rpi001_node-token') }} sh ~/getk3s.sh
      when: "'k3s_workers' in group_names"

After the playbook ran, I confirmed that all the nodes had joined the cluster by using the kubectl get nodes -o wide command.

pi@k3s-master-rpi001:~ $ sudo kubectl get nodes -o wide
NAME                STATUS   ROLES    AGE    VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
k3s-master-rpi001   Ready    master   4d9h   v1.18.3+k3s1   192.168.2.111   <none>        Raspbian GNU/Linux 10 (buster)   4.19.118-v7l+    containerd://1.3.3-k3s2
k3s-worker-rpi002   Ready    <none>   43s    v1.18.3+k3s1   192.168.2.112   <none>        Raspbian GNU/Linux 10 (buster)   4.19.118-v7l+    containerd://1.3.3-k3s2
k3s-worker-rpi004   Ready    <none>   38s    v1.18.3+k3s1   192.168.2.114   <none>        Raspbian GNU/Linux 10 (buster)   4.19.118-v7+     containerd://1.3.3-k3s2
k3s-worker-rpi003   Ready    <none>   37s    v1.18.3+k3s1   192.168.2.113   <none>        Raspbian GNU/Linux 10 (buster)   4.19.118-v7+     containerd://1.3.3-k3s2

I then created a test deployment and scaled it up to 20 replicas to verify that everything was working on all the nodes.

pi@k3s-master-rpi001:~ $ sudo kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
deployment.apps/nginx-deployment created
pi@k3s-master-rpi001:~ $ sudo kubectl scale --replicas=20 deployment/nginx-deployment
deployment.apps/nginx-deployment scaled
pi@k3s-master-rpi001:~ $ sudo kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
nginx-deployment-6b474476c4-lctzp   1/1     Running   0          2m13s   10.42.2.80    k3s-worker-rpi004              
nginx-deployment-6b474476c4-xfqvq   1/1     Running   0          2m13s   10.42.3.224   k3s-worker-rpi003              
nginx-deployment-6b474476c4-5hdq4   1/1     Running   0          2m13s   10.42.3.225   k3s-worker-rpi003              
nginx-deployment-6b474476c4-jdg5t   1/1     Running   0          2m13s   10.42.3.227   k3s-worker-rpi003              
nginx-deployment-6b474476c4-vvltg   1/1     Running   0          2m13s   10.42.3.230   k3s-worker-rpi003              
nginx-deployment-6b474476c4-5zskx   1/1     Running   0          2m13s   10.42.3.229   k3s-worker-rpi003              
nginx-deployment-6b474476c4-tcsdk   1/1     Running   0          2m13s   10.42.3.232   k3s-worker-rpi003              
nginx-deployment-6b474476c4-zsxqm   1/1     Running   0          2m13s   10.42.3.228   k3s-worker-rpi003              
nginx-deployment-6b474476c4-8mls6   1/1     Running   0          2m13s   10.42.3.231   k3s-worker-rpi003              
nginx-deployment-6b474476c4-w7j6z   1/1     Running   0          2m13s   10.42.3.226   k3s-worker-rpi003              
nginx-deployment-6b474476c4-5tfx2   1/1     Running   0          2m13s   10.42.0.82    k3s-master-rpi001              
nginx-deployment-6b474476c4-4nwz7   1/1     Running   0          2m13s   10.42.0.83    k3s-master-rpi001              
nginx-deployment-6b474476c4-f875h   1/1     Running   0          2m13s   10.42.0.84    k3s-master-rpi001              
nginx-deployment-6b474476c4-8v2c6   1/1     Running   0          2m13s   10.42.1.163   k3s-worker-rpi002              
nginx-deployment-6b474476c4-s7jd7   1/1     Running   0          2m13s   10.42.1.162   k3s-worker-rpi002              
nginx-deployment-6b474476c4-rsf46   1/1     Running   0          2m13s   10.42.1.165   k3s-worker-rpi002              
nginx-deployment-6b474476c4-mjg6d   1/1     Running   0          2m13s   10.42.1.161   k3s-worker-rpi002              
nginx-deployment-6b474476c4-97lqw   1/1     Running   0          2m13s   10.42.1.164   k3s-worker-rpi002              
nginx-deployment-6b474476c4-mc59w   1/1     Running   0          2m13s   10.42.1.167   k3s-worker-rpi002              
nginx-deployment-6b474476c4-6j62t   1/1     Running   0          2m13s   10.42.1.166   k3s-worker-rpi002              

That's it! All the pods were successfully scheduled and started on the cluster nodes in a matter of seconds. I'll be covering Kubernetes in greater depth in future posts, but this is a great starting point. Thanks for reading!

- Brian Brookman

