What is the proper way to have zfs load-key run at boot when the encryption key file is located in /root?
I used a similar method on Debian, which worked perfectly, but I want to make sure this is still the proper way to do it on Bluefin.
Background
I am new to Bluefin; I was previously running Debian for several years. The concept of an atomic OS is still very new to me.
I have bluefin-dx-nvidia-open:stable installed on 2 × 500 GB NVMe drives in RAID 0 with encryption (using the installer's automatic partitioner), but I also have 4 more drives that I've configured as an encrypted RAIDZ1.
My issue is that while the pool gets imported correctly, the encryption key does not get loaded, which prevents the ZFS datasets from being mounted.
Currently, I can run these two commands after logging in to resolve the issue:
sudo zfs load-key -a
sudo zfs mount -a
My goal is to automate this the right way, so that future Bluefin updates don't override these changes.
I did consider that, but I'm not too sure how to implement it either; I guess I would have to set up an automatic mount in fstab or something and put the key in there.
My understanding is that ZFS will import pools and mount them after the primary disk is decrypted. I’ve done further digging and found similar instructions on the ArchWiki to automate the zfs load-key function (Section 6.1.1).
This leads me to believe that ZFS doesn't attempt to load encryption keys by default, so adding a systemd unit for this seems like the best way.
As far as where to make that change, I rechecked the Bluefin documentation and found a section in the FAQ:
Follow the XDG standards for overriding core OS files. The /etc, /var, /usr/local, and /opt directories are writable, and many applications will look here for overrides of the read-only files shipped with the OS image in /usr
So I guess my conclusion is that adding the unit file under /etc/systemd/system/ would be the appropriate way to override this behavior without changing a part of the system that might get replaced by a future update.
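For reference, something along these lines is roughly what the ArchWiki approach boils down to. Treat it as a minimal sketch: the unit name, the zfs binary path, and the dependency ordering are my assumptions and should be checked against the zfs-mount.service that actually ships on Bluefin.

```ini
# /etc/systemd/system/zfs-load-key.service (sketch; name and paths are assumptions)
[Unit]
Description=Load ZFS encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Assumes keylocation already points at the key file (e.g. one under /root);
# adjust the binary path if zfs is not installed at /usr/sbin/zfs on your image.
ExecStart=/usr/sbin/zfs load-key -a

[Install]
WantedBy=zfs-mount.service
```

After placing it, sudo systemctl daemon-reload followed by sudo systemctl enable zfs-load-key.service should be enough for it to be picked up on the next boot.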
I’ll give it a try later and see if it works or not.
Place the unit file in ~/.config/systemd/user/. Not the best reference, but it's what I could find quickly, and it shows the resolution order early in the page.
That way the systemd unit files are in $HOME, and you could place the mount points somewhere in there too, e.g. ~/mnt or ~/.local/mnt.
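As a rough sketch of those mechanics (the unit name here is made up, and keep in mind that zfs load-key and zfs mount themselves still need root, so the unit body would have to account for that, for example through a sudoers rule):

```sh
# Hypothetical placement and enablement of a user-level unit
mkdir -p ~/.config/systemd/user ~/.local/mnt
cp zfs-datasets.service ~/.config/systemd/user/   # unit name is illustrative
systemctl --user daemon-reload
systemctl --user enable --now zfs-datasets.service
```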
In my case, I would modify my ~/.local/bin/backup-home.sh script to include or exclude mount points as needed from the rsync command line.
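Purely as an assumed excerpt of what such a script might contain (paths and flags are illustrative, not taken from the actual script):

```sh
# Hypothetical excerpt from ~/.local/bin/backup-home.sh:
# skip the ZFS mount points under ~/.local/mnt when syncing $HOME
rsync -aHAX --delete \
  --exclude='.local/mnt/' \
  "$HOME/" /path/to/backup/destination/
```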
But the unit files would be in ~/.config/systemd and would be backed up. This, of course, is to simplify the process of reinstalling.
I have been working hard on reducing my manual post-install checklist, and approaches like this are doing it for me.
In short, thanks to j0rge, infy, mgiles, etc., who have been helping me adjust my mental model of how to manage and use a Linux dev workstation, I have reached this simple conclusion:
The OS is there to enable what I do in my home directory. Any change outside of my home directory needs to be highly scrutinized and avoided where possible.