Can I add a second SSD containing an existing /home partition to an already-installed Aurora without losing all the data?
If all you want to do is access the data on the disk, you can just mount it like a normal disk and the directory structure will remain intact. It won’t collide with your existing /home because it will be mounted somewhere like /mnt/newssd/(directory structure on SSD). If you mean you want to add the user from that home partition to your Aurora install, you could perhaps symlink it, but someone else should chime in if that is what you want; I am not sure how that would work.
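For example, a one-off mount to browse the data might look like this (the device name `/dev/sdb1` is an assumption - check yours first):

```shell
# Device name is an example - find your actual partition with: lsblk -f
sudo mkdir -p /mnt/newssd
sudo mount /dev/sdb1 /mnt/newssd
ls /mnt/newssd   # the old home directory structure should appear here
```

This is read-as-normal access only; nothing here touches your existing /var/home.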
“Anything is possible given enough time and money.” -former boss
But why would you want to do that? I am truly curious. I have not run into this use case before. Remote, network mounted home dirs - sure. But, those are a really bad idea.
You will end up with a non-standard install that you will fight with from then on…
Anyway, here are a few options I came up with off the top of my head …
Option 1: unwind the btrfs definition of /var/home and then redefine it to mount the associated btrfs volume from the 2nd SSD instead.
This is somewhat of an advanced approach.
And if you are attempting to do this from within a VM, it is probably not what you want to attempt.
Make sure you back up the current `/etc/fstab` first!
Option 2: create another mount point - say `/var/home2` - and add an `/etc/fstab` entry to mount the 2nd SSD there. Then change your user’s home dir to the `/var/home2` location using `sudo usermod -d /var/home2/user_id ...`. It would be best to do that logged in as another user, and then reboot. Note: make sure NOT to use the `-m` switch to `usermod`. Look at `man usermod` - it should be obvious why you would not want to move the home dir.
You will also need to make sure that the UID:GID for the 2 installs are identical. And you may need to consider selinux attribute differences (if any are applicable).
Make sure you back up the current `/etc/fstab` first!
But because you are not touching the original btrfs volume (currently mounted as `/var/home`), you should be able to roll back to it if need be.
Again, if in a VM be aware of the additional complexity involved in working with physical drives from within the VM.
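As a rough sketch, the Option 2 fstab entry might look like this - the UUID is a placeholder and the filesystem type is an assumption (use `lsblk -f` to find the real values):

```
# Example /etc/fstab entry for option 2 - UUID and fstype are placeholders
UUID=<uuid-of-2nd-ssd>  /var/home2  ext4  defaults,nofail  0  2
```

The `nofail` option keeps the system booting even if the 2nd SSD is absent.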
Option 3: Or, copy the data using backup/restore (one option). I would create another user first in case the user’s home dir gets borked in the process.
And if you use `brew`, don’t forget about `/var/home/linuxbrew`. You will need to decide what to do with it.
Good luck. But be careful.
I have an entire SSD as /home on one (now dead) PC.
I have a working machine with Aurora, and I would like to add the SSD to that and make it my /var/home partition. I have the same user and the same flatpaks on both machines so I hope the whole thing will not be too traumatic.
So you are saying that the SSD contains just the /home dir? Formatted as ext4?
EDIT: it occurred to me that, of course, you will need to create a new user whose home dir is somewhere else (e.g., `/var/home2/`) first, before you dismantle /var/home in the next step.
If so, then it should just be a matter of modifying the `/etc/fstab` file to mount that SSD to /var/home.
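Something along these lines - again, the UUID is a placeholder and ext4 is an assumption about how the old drive is formatted:

```
# Example fstab line - replace the placeholder with the SSD's actual UUID (lsblk -f)
UUID=<uuid-of-old-home-ssd>  /var/home  ext4  defaults  0  2
```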
EDIT: once this is done and working, you should be able to delete the new user and the `/var/home2` dir.
There are some other things that will need to be done to delete the existing /home subvol and reclaim the space. But, …
If you are looking to do an impromptu backup/restore procedure, I have a different response prepared with some example scripts.
This should be simple because you already have a drive that is a backup of your `/var/home` dir.
It worked, thanks.
Out of curiosity, what did you have in mind? It may come in handy sooner or later.
Since you solved your issue - here is the trimmed down reminder about backups.
If it were me I would just restore my backup which has been automated for years. With bluefin my post-install checklist has reduced dramatically.
I hope you will take the time to get a backup in place for your $HOME dir. It takes away the fear of upgrades / re-installs completely.
The Universal Blue images all come with backup software installed.
But, I have learned over the decades to dislike GUI backup software. I have been bitten too many times by misunderstandings of over-simplified screen designs, poorly worded check boxes, etc.
I use the rsync command coded in scripts explicitly so I am in direct control of exactly what is happening.
I have 4 scripts in `~/.local/bin`:
- `backup-home.sh`
- `restore-home.sh`
- `backup-home-ext.sh`
- `restore-home-ext.sh`
I have created a temporary repo containing samples of the scripts mentioned above.
You will want to tailor them for your installation.
You can use `man rsync` to gain an understanding of the rsync options being used.
With trusted (redundant - NAS + external drive) backups, and a shortened post-install checklist I can reinstall and be back where I was in an hour or so.
If you don’t have a NAS: I built one using a Raspberry Pi 4B+ and a 2TB drive, running OMV on Raspberry Pi OS. It cost <$100.
There you go. Hope that helps.
You’re a deity among men. Great job. I’ll definitely follow your advice.
Nah, just retired with a lot of experience and time
And don’t forget that you still have that /home subvol occupying space on your main btrfs partition.
You can see all the subvols with:
`sudo btrfs subvolume list /`
I have not dealt with removal / resizing of subvolumes yet. Perhaps you can repurpose the /home subvolume and mount it somewhere else?
That would be the least risky thing to do if it makes sense.
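For reference - and this is generic btrfs usage, not something tested on Aurora specifically - inspecting and eventually removing a subvolume generally looks like this:

```shell
# Generic btrfs commands (need root); the subvol path comes from the list output
sudo btrfs subvolume list /
# Only once you are certain nothing mounts or references the old subvol:
# sudo btrfs subvolume delete /path/to/old/home/subvol
```

The delete is left commented out on purpose; it is irreversible, so double-check the path against the list output first.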
You could also use Pika, which is included with Bluefin and Aurora. Pika allows you to (quoting):
- Create backups locally and remotely
- Set a schedule for regular backups
- Save time and disk space because Pika Backup does not need to copy known data again
- Encrypt your backups
- List created archives and browse through their contents
- Recover files or folders via your file browser
I use a different program called Vorta because it’s what I’m used to.
Both use BorgBackup as their backend.
Yeah, those are options that I loosely mentioned.
But after being bitten badly back in the 90s, when a team member misconfigured a customer’s backup on SCO Unix with a similar (albeit terminal-based) backup program, I have learned to avoid them.
Backups are too critical and you will not know there is a problem (especially with exclusion rules, etc.) until you need to rely on the backup.
Instead, this is an area where everyone should dig deep and understand how their backups work and how they are set up. rsync is very efficient; I haven’t needed anything else for over 20 years.
As a matter of fact I just used it yesterday to reliably copy a bluefin ISO onto my Ventoy USB thumb drive.
In short, depending on the design of the app, GUI backup programs can be helpful. But they should NOT be used as an excuse to stay uninformed about backup strategy best practices or how the underlying command line they call performs its work.
Be careful.
In my experience it is best to take the time to code the command line yourself - explicitly.