Harvester is a new open source Hyperconverged Infrastructure (HCI) solution from Rancher. It's built on Kubernetes and uses KubeVirt to run KVM virtual machines inside Kubernetes pods. It also includes Rancher's Longhorn distributed storage system, which provides shared block storage to all VMs and enables live migration out of the box with zero configuration.
Coming from a background in more traditional hypervisors, seeing that you could use Kubernetes to schedule VMs across nodes was a lightbulb moment. Instead of balancing VMs across servers manually, Kubernetes can dynamically schedule them. You can even define VMs as manifests and manage them via GitOps. That, combined with having Longhorn built in, got me thinking seriously about replacing Proxmox VE with Harvester in my homelab.
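To make that concrete, here's roughly what a VM looks like as a Kubernetes object: a trimmed-down KubeVirt VirtualMachine manifest with placeholder names and sizes (not taken from an actual Harvester install):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-example
  namespace: default
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        memory:
          guest: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: vm-example-rootdisk   # in Harvester this would be a Longhorn-backed PVC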
First impressions
I was already considering buying a few more small PCs to upgrade my homelab anyway, so Harvester was the perfect excuse for some new hardware. I bought 5 refurbished Lenovo M910q small form factor PCs with i7-7700T CPUs, 32GB of DDR4 RAM, and 512GB SATA SSDs.
After getting the nodes cabled up, I got to work installing Harvester. I downloaded the latest version at the time of writing, which was 1.0.3. The installation via USB image was straightforward and easy to follow. After the installation completed on the first node, I started by exploring the web interface.
The web interface
The web interface was modern and polished, and performed reasonably well on my lab hardware. I did find it a little less mobile friendly than I would expect from such a modern web interface, but that's barely worth mentioning. It's still better than most, certainly better than Proxmox's web interface on mobile.
One great feature is the ability to download or edit configurations as YAML directly in the web interface. While most of the Kubernetes API objects are abstracted away from the web interface, having the ability to work with the underlying YAML when needed is a nice touch.
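And because they're plain Kubernetes objects, the same YAML can be pulled from outside the UI too, assuming you have a kubeconfig for the cluster (the VM name and namespace here are placeholders):

$ kubectl get virtualmachine vm-test-01 -n default -o yaml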
One complaint I did have, though, is that there was no way to view or modify the status of the host network interfaces. If you need to check whether an interface in a bond is down, for example, or view other settings configured during the installation, your only option is the CLI. I can see why you might not want some of those settings editable, but checking interface status, or even just seeing how many physical network interfaces are installed, seems pretty basic. Maybe that's just the network engineer in me talking.
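For example, this is the sort of thing I'd want to check, which currently means SSHing into the node:

$ ip -br link show              # quick up/down status for every interface
$ cat /proc/net/bonding/bond0   # per-member status of a bond (the bond name will vary)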
Resource expectations
After installing Harvester on the first node, I was initially a little surprised to see it using 9GB of memory with no workloads running. I knew my hardware wasn't server-grade and barely met the minimum spec requirements, but I was hoping for a little more headroom in my edge use-case. After installing it on the remaining nodes, however, I noticed that not all of the nodes were using the same amount of resources. The first three nodes run the Kubernetes control plane and other components that aren't present on every node. Of course, Harvester comes bundled with a lot more features, so the resource usage might be justified.
Provisioning VMs
Next, I created a few VMs to test out the scheduling and live migration. Defining a new VM was straightforward and similar to what I'm used to in Proxmox (which makes some sense since Proxmox also uses KVM). However, Proxmox does give you more advanced options when selecting the virtual hardware and defining a VM. Unless you're doing something specific though, you probably won't miss anything. PCI passthrough is already on the roadmap for Harvester 1.1.0 since it's recently been added to KubeVirt. One unexpected problem I did run into: after deploying a new VM based on a larger image file, the VM seemed to be stuck and wouldn't start. Viewing the events for the Kubernetes pod (an example of which is shown below), I could see it was failing to attach the disk. It took a little time, but eventually it started without intervention. I suspect this was related to Longhorn replication in some way. Now that I had Harvester running with a workload scheduled, the next thing to do was of course to try and break it.
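For reference, checking the pod events looked something like this (the namespace and pod name are illustrative; KubeVirt runs each VM inside a virt-launcher pod):

$ kubectl get pods -n default
$ kubectl describe pod virt-launcher-vm-test-01-abcde -n default
# the Events section at the bottom showed the failed volume attach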
While you are able to create custom scheduling rules when defining a VM (using Kubernetes labels and selectors), I selected the default, "Run on any node". After creating a test VM, I cloned it using the "multiple instances" option, which let me create up to 10 VMs at once. The first VMs were scheduled on my two non-control-plane nodes, which made sense since they had the least resources utilized. After repeating the process a few more times, I had 30 VMs provisioned. After the 25th, however, there weren't enough unreserved resources to schedule a new VM, so it just stayed pending. I was pleased that nothing broke and the web UI was at least still responding okay.
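For what it's worth, those custom scheduling rules end up as ordinary Kubernetes fields on the VM template. A VM pinned to a particular node might look roughly like this (the node name is a placeholder):

spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: harvester-node-04
      # affinity/anti-affinity rules can be used here as well for softer preferences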
Later, after reading through the documentation, I learned that VMs have a default overcommit value which allows you to schedule more resources than you actually have. VM instances are also scheduled based on a score assigned to each node. No complaints from me: scheduling worked as expected, nothing broke catastrophically, and I didn't have to do any mental math on where to put the VMs.
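If you want to see (or change) those ratios, they live in a cluster-wide Harvester setting. Something like this should print them, if I've got the resource name right:

$ kubectl get settings.harvesterhci.io overcommit-config -o yaml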
Next, I tested the live migration capabilities. I've never had shared storage for VMs or live migration in my homelab, so this was a feature I was very excited about. Harvester (using Kubernetes) allows you to cordon off a node to prevent scheduling on it, or put it in maintenance mode, which evicts all of the VMs on the node and live migrates them to another node in the cluster. It worked flawlessly and there were no errors or other issues. However, I did notice about 20-30 seconds of connectivity loss during the live migration. I expect that other hypervisors do better when it comes to live migration, but I wouldn't call it a deal breaker by any means. Your mileage might vary with better hardware and networking.
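Since it's all Kubernetes and KubeVirt underneath, the rough CLI equivalents of those buttons (node and VM names are placeholders) would be something like:

$ kubectl cordon harvester-node-02     # stop new workloads from being scheduled on the node
$ kubectl uncordon harvester-node-02   # allow scheduling again

A one-off live migration can also be requested with a KubeVirt object:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-vm-test-01
  namespace: default
spec:
  vmiName: vm-test-01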
Advanced use-cases
There were a few more advanced features I wanted to explore before I made a final decision.
Deploying VMs with GitOps
While not explicitly called out on the Harvester homepage or in the documentation, since all the resources defined in Harvester are Kubernetes API objects, you should theoretically be able to deploy them using manifests, or even better, using GitOps.
To test this out as simply as possible, I used Flux CD and just added the YAML from an existing VM to my repository.
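For anyone curious about the Flux side, it was nothing special: a GitRepository source plus a Kustomization pointing at the directory containing the VM YAML. Something along these lines (names are from my test repo layout; adjust to taste):

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: harvester-vms
  namespace: flux-system
spec:
  interval: 5m
  path: ./harvester        # directory holding vm-test-01.yaml
  prune: false
  sourceRef:
    kind: GitRepository
    name: harvester-gitops-test   # assumes a GitRepository source with this name already exists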
If I were deploying a brand new VM that didn't already exist, I'm sure it wouldn't be such a simple task. I'd also have to define other related resources like the PVC the VM would use, and would need to go through a lot of trial and error to find which fields I needed to include. For now though, I just wanted to see if I could successfully deploy a change to an existing resource using GitOps.
As a test, I decided to reduce the VM's memory from 8Gi to 4Gi. One thing to note when modifying CPU or Memory in this way is that Harvester is no longer controlling overcommit of resources. This is actually mentioned in the Harvester docs:
If you need to configure individual virtual machines without involving global configuration, consider adjusting the spec.template.spec.domain.resources. value on the target VirtualMachine resource individually. Note that by modifying these values, you are taking over control of virtual machine resource management from Harvester.
For this reason, I also adjusted the memory request to match what Harvester would do by default. Here's the diff of the changes I made:
$ git diff c341333 14d8dd4
diff --git a/harvester/vm-test-01.yaml b/harvester/vm-test-01.yaml
index 8bfe5a1..c2b135f 100644
--- a/harvester/vm-test-01.yaml
+++ b/harvester/vm-test-01.yaml
@@ -82,14 +82,14 @@ spec:
         machine:
           type: q35
         memory:
-          guest: 8092Mi
+          guest: 4096Mi
         resources:
           limits:
             cpu: "4"
-            memory: 8Gi
+            memory: 4Gi
           requests:
             cpu: 250m
-            memory: 5461Mi
+            memory: 2731Mi # memory / 1.5 (default memory overcommit ratio)
       evictionStrategy: LiveMigrate
       networks:
       - multus:
A short time after I pushed the changes to my repository, I saw the change reflected in Harvester's web interface!
While testing this out, there were a few lessons learned worth mentioning. Some sections of a VirtualMachine definition are clearly "read-only", namely the "managedFields" and "status" sections. When editing a VM's config via YAML in the web interface, these sections are pre-collapsed, implying they shouldn't be changed. Knowing this, I omitted them entirely when first committing the VM's manifest to my repository. However, I found there were two other keys I didn't realize also needed to be omitted: "resourceVersion" and "uid". While my changes were deployed successfully, I noticed that Flux was suddenly unable to reconcile what it had just applied. The reason was that the "resourceVersion" and "uid" values had changed after my changes were made. In fact, even restarting a VM causes both of these values to change. After removing these keys from the file, Flux reconciled successfully and all was well. Hopefully, as Harvester matures, more documentation will be added on managing resources via manifests in this way. For now, expect a bit of trial and error to determine what should be included.
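If you're keeping VM manifests in Git, one quick way to strip those read-only fields before committing is something like this (assuming mikefarah's yq v4 is installed):

$ kubectl get vm vm-test-01 -n default -o yaml \
    | yq 'del(.metadata.managedFields, .metadata.resourceVersion, .metadata.uid, .status)' \
    > harvester/vm-test-01.yaml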
If you're interested, my test repository including the changes I made is available at https://github.com/bcbrookman/harvester-gitops-test.
Leveraging Harvester in guest K8s clusters
One of the big concerns I had when initially considering Harvester was whether it would be feasible to run a distributed shared storage system like Longhorn inside a guest K8s cluster, on top of Longhorn-backed virtual disks. It just sounds like a horrible idea. Fortunately, the Harvester developers thought of this too. Harvester comes with a cloud provider and CSI driver that let you use a cloud load-balancer and the host Harvester cluster's Longhorn storage from within guest RKE1/2 and K3s clusters.
The Harvester documentation has these features nested under the Rancher integration section, since that's probably how most people will deploy a K8s cluster in Harvester. However, I'm sure it's possible to use them without Rancher if you wanted to.
Unfortunately, I didn't actually get to the point of testing out these features, but it's definitely nice to know they're available for the future.
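For future reference though, the guest-cluster side appears to be ordinary CSI usage. Based on the docs, requesting a Longhorn-backed volume from a guest cluster should look something like a normal PVC against the Harvester storage class (the class name is an assumption; it depends on how the CSI driver is installed):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: harvester   # assumed name; check the installed CSI driver's StorageClass
  resources:
    requests:
      storage: 10Gi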
Conclusion
After a lot of back and forth, I ultimately decided to continue using Proxmox VE in my homelab. It was a hard decision to make because I really, really wanted to make the switch to Harvester. The built-in shared storage using attached drives, live migration, and the ability to define VMs in K8s manifests and deploy them using GitOps very nearly justified the higher resource utilization. That's not to say that Harvester is terribly inefficient or uses too many resources; it just didn't make sense for my homelab.
The truth is that I don't really need all of my VMs to be backed by shared storage. Most of the services I deploy will be running in K8s clusters on VMs, where it'd probably be better to just use Longhorn or another external storage provider as needed. Having everything backed by Longhorn just to enable live migration of VMs would really just mean more replication traffic and more wear on my SSDs than is truly necessary.
I'm sure I'll revisit this decision later as Harvester continues to mature, though. I'll also be watching a few GitHub pull requests and issues to see where they land. I would especially love to see more configurable installation options for edge use-cases like mine, and the USB and PCI passthrough support that's already on the roadmap. For now though, I think Proxmox is still going to serve my current homelab needs best. If you run VM workloads and have the compute resources and network to spare, Harvester might be a great fit for your homelab. Like always, it just depends.