Setting up Debian (Host OS)
Increasing default open file and watch limit
Why is this necessary?
Kubernetes environments frequently encounter file handle exhaustion because multiple processes running under the same user need to open numerous files simultaneously and use filesystem watchers (fswatch/inotify) to monitor configuration changes, logs, and resources.
The default Debian limits (~1024 file handles, low inotify watchers) are insufficient for Kubernetes clusters where container runtimes, API components, and applications can easily exceed these quotas.
These increased limits prevent "too many open files" errors and ensure proper filesystem monitoring for Kubernetes controllers and logging systems.
Add the following content to the config files. The nofile limits belong in /etc/security/limits.conf; the fs.inotify settings go in a sysctl file (e.g. under /etc/sysctl.d/) and are applied with sysctl --system.
* soft nofile 1048576
* hard nofile 1048576
root soft nofile 1048576
root hard nofile 1048576
fs.inotify.max_user_instances = 1280
fs.inotify.max_user_watches = 10028400
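Before (or after) raising these, it can help to check how close the host actually is to the ceilings. A Linux-only sketch that counts in-use inotify instances via /proc:

```shell
# Count inotify instances currently held open across all processes.
# Each /proc/<pid>/fd symlink pointing at anon_inode:inotify is one
# instance counted against fs.inotify.max_user_instances.
instances=$(find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l)
echo "inotify instances in use: $instances"
# Configured ceiling for comparison:
cat /proc/sys/fs/inotify/max_user_instances
```

Run it as root to see instances held by other users' processes too; unprivileged, it only counts what it may inspect.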
Not sure if I need to modify the following (in /etc/systemd/system.conf and /etc/systemd/user.conf):
# Below the [Manager] block!
DefaultLimitNOFILE=1048576
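If you do raise the manager-level limit, a drop-in file is safer than editing system.conf in place. A sketch that stages the drop-in in a scratch directory first, so nothing is touched until you copy it over as root (the path /etc/systemd/system.conf.d and the file name 10-nofile.conf are assumptions, not something from my setup):

```shell
# Stage a systemd manager drop-in that raises DefaultLimitNOFILE.
staging=$(mktemp -d)
mkdir -p "$staging/etc/systemd/system.conf.d"
printf '[Manager]\nDefaultLimitNOFILE=1048576\n' \
    > "$staging/etc/systemd/system.conf.d/10-nofile.conf"
cat "$staging/etc/systemd/system.conf.d/10-nofile.conf"
# As root, copy into place and re-execute the manager:
#   cp -r "$staging/etc" /
#   systemctl daemon-reexec
```

After the daemon-reexec, systemctl show -p DefaultLimitNOFILE should report the new value.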
Increase ZFS Arc Size
By default the ARC size is quite small for my system. I'm on 128 GB of RAM and would like to dedicate at least half of that to ZFS.
# Set Max ARC size => 80GB == 85899345920 Bytes
options zfs zfs_arc_max=85899345920
# Set Min ARC size => 64GB == 68719476736 Bytes
options zfs zfs_arc_min=68719476736
These options go in /etc/modprobe.d/zfs.conf. After modifying that file make sure to regenerate the initramfs so the new values are applied at boot.
$ update-initramfs -u -k all
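The byte values are just GiB multiplied out; a small sketch to recompute them when sizing for a different machine:

```shell
# zfs_arc_max / zfs_arc_min take bytes; derive them from GiB.
gib() { echo $(( $1 * 1024 * 1024 * 1024 )); }
gib 80   # -> 85899345920 (zfs_arc_max)
gib 64   # -> 68719476736 (zfs_arc_min)
# After boot, the live values can be read back from
#   /sys/module/zfs/parameters/zfs_arc_max
```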
Setup ZFS Key Autoload for Full Root Encryption
I'm using ZFSBootMenu with full root encryption. This protects against leaking secrets or data stored in the Kubernetes etcd database if a disk is lost or stolen.
My ZFS datasets are encrypted, each with its own key. To load those keys automatically at boot, we set up a systemd service (saved as /etc/systemd/system/zfs-load-keys.service):
[Unit]
Description=Load ZFS encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zfs load-key -a
StandardInput=tty-force
[Install]
WantedBy=zfs-mount.service
chmod 644 /etc/systemd/system/zfs-load-keys.service
systemctl daemon-reload
systemctl enable zfs-load-keys.service
systemctl status zfs-load-keys.service
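To verify the service did its job after a reboot, the keystatus property shows whether each dataset's key is loaded ('available') or not ('unavailable'); the fallback echo is only there so the sketch degrades gracefully on machines without ZFS:

```shell
# 'keystatus' reads 'available' once a dataset's key is loaded.
status_report=$(zfs get -H -o name,value keystatus 2>/dev/null \
    || echo "zfs unavailable on this host")
echo "$status_report"
```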
Setting up Kubernetes
Install the cluster
k0sctl apply
k0sctl kubeconfig > ~/.kube/config
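k0sctl apply reads the cluster definition from a k0sctl.yaml in the working directory. For context, a minimal single-node sketch; the address, user, and key path are placeholders, not my real values:

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: homelab
spec:
  hosts:
    - role: controller+worker
      ssh:
        address: 192.0.2.10       # placeholder host IP
        user: root
        keyPath: ~/.ssh/id_ed25519
```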
Restore Sealed Secret key
(Restore sealed-secret.yaml from Backup)
k apply -f sealed-secret.yaml
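For reference, the backup being restored here can be produced from the old cluster with the label selector the sealed-secrets controller puts on its signing-key secrets (kube-system is the assumed controller namespace):

```shell
# Export the sealed-secrets signing key(s) for backup; run this
# against the OLD cluster before tearing it down.
kubectl get secret -n kube-system \
    -l sealedsecrets.bitnami.com/sealed-secrets-key \
    -o yaml > sealed-secret.yaml \
    || echo "no cluster reachable from this machine"
```

After applying the restored key, restart the sealed-secrets controller pod so it picks the key up.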
Install ArgoCD
cd ops/argocd
k create ns ops
k kustomize --enable-helm | k apply -f -
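The --enable-helm flag means the kustomization in ops/argocd inflates a Helm chart. A sketch of what such a kustomization.yaml can look like; the chart details shown are illustrative, not my actual pinned setup:

```yaml
# ops/argocd/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ops
helmCharts:
  - name: argo-cd
    repo: https://argoproj.github.io/argo-helm
    releaseName: argocd
    namespace: ops
    valuesFile: values.yaml
```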
From now on, everything else will be installed and set up by ArgoCD.