But if you’re removing a disk so that you can migrate data off the zpool
onto that disk, and you plan on destroying the zpool anyway, then read on.
In my case, I wanted to switch this particular zpool over to a simple mergerfs volume and copy the data from the zpool into it. But I didn’t have any additional (big enough) disks lying around, so the approach I used was to incrementally remove disks from the zpool, add them to a mergerfs volume, and copy the data from one to the other as I went.
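The incremental loop I describe can be sketched as a shell session. The pool name (`tank`), device names, and mount points below are hypothetical, and `zpool remove` only supports top-level vdev removal on recent OpenZFS releases, so treat this as an outline rather than a recipe:

```shell
# Free one disk from the pool. For a mirror vdev use `zpool detach`;
# recent OpenZFS can evacuate and remove a top-level vdev outright.
# Pool name "tank" and devices are example placeholders.
zpool remove tank /dev/sdb

# Put a plain filesystem on the freed disk and mount it.
mkfs.ext4 /dev/sdb1
mkdir -p /mnt/disk1
mount /dev/sdb1 /mnt/disk1

# Pool the reclaimed disks together with mergerfs
# (branches are colon-separated; mfs = most-free-space create policy).
mkdir -p /mnt/storage
mergerfs -o defaults,allow_other,category.create=mfs \
    /mnt/disk1:/mnt/disk2 /mnt/storage

# Copy enough data off the zpool to free up the next disk.
rsync -avh --progress /tank/data/ /mnt/storage/data/

# Repeat until the pool is empty, then tear it down and absorb
# the remaining disks into the mergerfs volume.
zpool destroy tank
```

Each pass frees exactly one disk’s worth of capacity, which is why the copy has to happen between removals rather than all at once.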
Up until 1.20, k3s shipped with Traefik 1.7.x built in. Since 1.21, Traefik
2.4.x has been the bundled version. If you upgraded your k3s deployment from
<=1.20 to >=1.21, k3s will do nothing if it detects Traefik 1 installed, so it’s
up to the administrator to upgrade it. Here’s how I did it in my cluster with
some basic examples.
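The upgrade boils down to removing the Traefik 1 release and handing k3s a Traefik 2 chart in its place. k3s auto-deploys any HelmChart manifest dropped into its manifests directory; a minimal sketch might look like the following, where the chart version and values are illustrative assumptions rather than exact figures:

```yaml
# /var/lib/rancher/k3s/server/manifests/traefik.yaml
# k3s picks up HelmChart resources placed in this directory.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: traefik
  repo: https://helm.traefik.io/traefik
  # Pin a chart version that ships Traefik 2.4.x; check the chart's
  # release notes for the exact mapping (this number is an example).
  version: "9.18.2"
  valuesContent: |-
    service:
      type: LoadBalancer
```

Before this applies cleanly, the old Traefik 1 release has to go (for example by deleting its old HelmChart resource in `kube-system`), since the two major versions use incompatible configuration formats.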
I recently converted my Docker Swarm-based homelab to one powered by Kubernetes.
My original goal with setting up Swarm was to fully leverage the compute
and memory available on my two-node setup. Ultimately though, because of
shared storage and other complications with hardware devices, I ended up
manually scheduling containers onto specific nodes, which defeated the
purpose entirely.
k9s in action
Plus Kubernetes is cooler 😎.
The process took about a week, working a few hours at a time. I wanted
to write about it here because otherwise I’ll forget everything.