Won't boot into newest image

Could not tell you. That’s beyond my depth at this point.

1 Like

My apologies, I was just about to edit my post to apologize again for seeming patronizing while instructing you to type a bunch of commands that you possibly already knew.

You can figure out which partitions are mounted where by typing mount with no arguments. It might be a long list, so you could do a mount | grep boot or mount | grep sdc or whatever to filter it.

To see how much disk space is left, I think you can do a df -ha
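In case it helps, here's what that pair looks like typed out (assuming the drive in question is sdc; both are read-only queries and change nothing):

# show everything currently mounted from sdc
mount | grep sdc

# show how much space is left on each filesystem, human-readable
df -ha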

Just to clarify I’m currently running Bluefin so sdc is already mounted. Does that change anything?

Nope. If you run mount with no arguments, it merely reports what is currently mounted without making any changes. IIRC, it doesn’t even usually require any administrative permissions. We’re not trying to change anything, just to get a measure of what’s mounted where and how much disk remains free.

Screenshot from 2024-10-01 11-43-35

Oh, they were two different possibilities for search. command 1 and command 2 are different commands, though in retrospect I can see how the shading is perhaps a little too subtle for that to be obvious.

Try the second one. Just type “sudo mount | grep sdc” (without the quotes) on a line by itself, followed by the enter key.

edit: followed, of course, by a df -ha also executed on a line by itself with the enter/return key.

Doh! Now I feel stupid and ignorant. I do appreciate your help.

[quote="harold, post:27, topic:4171"]
Doh! Now I feel stupid and ignorant.
[/quote]

Please don’t. Nobody is born knowing this stuff. And, again, there’s no guarantee that anything we’re looking at is going to solve your problem. So I’ll have my own turn to feel ignorant. LOL.

Anyway, the good news is that it does look like you have a dedicated /boot partition and that it is over 80% filled. IDK how large each additional snapshot/kernel is, but it doesn’t seem at all inconceivable that your failure is literally from running out of space. And that’s even with the generous (by normal standards) 1GB partitions you made for your EFI and boot partitions.
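If you want to see the numbers for yourself, something along these lines should show them (the /boot/ostree path is the usual spot these images keep per-deployment kernels, but treat that as an assumption):

# how full the dedicated /boot partition is
df -h /boot

# rough size of the kernel/initramfs each deployment keeps in /boot
sudo du -sh /boot/ostree/*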

Unless I miss my guess, the proper fix is to make the boot partition much larger. A few extra GB out of your 1,000 GB SSD is so tiny as to be unnoticeable, but could probably eliminate entirely the hassle of constantly pruning restore points. There are a few ways to go about this, but unless you're adding another disk, most of them revolve around shrinking/growing/moving existing partitions. And that, unfortunately, means a risk of data loss. You'll probably be OK, but if you proceed I strongly recommend you at least back up important files, and ideally back up everything.

Then, boot from something that doesn’t mount this SSD (you mentioned having kubuntu) and run gparted after ensuring the drive isn’t mounted. From there, you can shrink your root partition (sdc3) leaving space at its front and then grow /boot (sdc2) into the newly unallocated space. I’d probably shift another 9GB because I wouldn’t want to ever have to futz with it again and you’ve got waaaaaayyyyy more than enough disk to spare at present.
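Before letting gparted touch anything, it's worth a quick sanity check from the kubuntu side that nothing on that disk is mounted (assuming it still shows up there as sdc):

# confirm none of the sdc partitions show a mountpoint
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINTS /dev/sdc

# if anything did get auto-mounted, unmount it before starting gparted
sudo umount /dev/sdc2 /dev/sdc3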

gl

1 Like

I’ve cloned the Bluefin drive and booted into Kubuntu. Gparted isn’t allowing me to resize any of the partitions even though the drive isn’t mounted.

Not a lot to go on, but I guess I'd recommend booting from a gparted livecd (in UEFI) and trying it that way. If it doesn't work, please report what “doesn't work” means: an error, options not available, etc.

gl

A tool I use for many drive tasks is Rescuezilla. Gparted is included, along with image backup and restore. You can make a drive image before modifying your partitions; if something doesn't work out, just restore the prior image.

https://github.com/rescuezilla/rescuezilla

1 Like

I have Rescuezilla on a Ventoy drive. I’ll give it a try in the next day or so. Thanks

1 Like

I had cloned my Bluefin Nvidia GTS system over to an NVMe drive Tuesday evening. Yesterday morning I booted into Kubuntu in order to adjust the partition containing the Bluefin images, but was unable to do so, so I booted back into Bluefin. Mysteriously, Bluefin somehow updated yesterday even though automatic updates had been turned off. When I booted this morning, grub came up, which it hadn't been doing previously. I ran rpm-ostree status and saw that it was on the newest image. After unpinning the oldest image, from 9-7, I ran cleanup and it worked. It worked maybe a little too well: it removed all the images except for the newest and running image.
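For anyone who lands here later, the sequence was roughly this (the deployment index is only an example; check the status output for the right one on your system):

# list deployments; pinned ones are flagged in the output
rpm-ostree status

# unpin the old deployment you no longer need (index 2 is just an example)
sudo ostree admin pin --unpin 2

# drop the pending and rollback deployments; anything no longer pinned or booted gets pruned as well
sudo rpm-ostree cleanup -p -r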

Looks like my update, grub, and cleanup issues have been resolved, at least for the moment. Needless to say, I'm not shutting down or rebooting until after the next update.

2 Likes

Wait, do you have Bluefin on a custom partition instead of the whole disk?

Egads! What an ordeal. And after all that, you’re still kind of walking on eggshells with a presumably undersized /boot partition that will eventually overfill and fail with the same ostree errors.

1 Like

No partitions were changed. The default install partitions were cloned to a new drive with the intention of adjusting the size of the partition containing the images in gparted. I wasn’t able to make those adjustments though.

1 Like

do you have Bluefin on a custom partition instead of the whole disk?

Does whatever scheme you recommend not normally include a /boot partition of finite size? I feel like @inffy spotted the truth of the thing instantly from the relatively straightforward error message:

Disk is full as the error states “No space left on device”

Dunno. Someone should probably test in a VM: pin a bunch of old snapshots and see whether the default boot partition should perhaps be increased.
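A rough way to run that experiment in a throwaway VM (assuming a stock Bluefin install in it):

# after each update in the VM, pin the current deployment (index 0)
sudo ostree admin pin 0

# then watch how much of the 1G /boot the accumulated kernels are using
df -h /boot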

1 Like

I have a fresh Bluefin install on my framework.

❯ lsblk 
NAME                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
zram0                                      252:0    0     8G  0 disk  [SWAP]
nvme0n1                                    259:0    0 953,9G  0 disk  
├─nvme0n1p1                                259:1    0   600M  0 part  /boot/efi
├─nvme0n1p2                                259:2    0     1G  0 part  /boot
└─nvme0n1p3                                259:3    0 952,3G  0 part  
  └─luks-28319e95-21ca-4e15-940f-98172173029e
                                           253:0    0 952,3G  0 crypt /usr/bin/swtpm
                                                                      /var/home
                                                                      /var
                                                                      /sysroot/ostree/deploy/default/var
                                                                      /usr
                                                                      /etc
                                                                      /
                                                                      /sysroot

This is the default partition layout (with encryption enabled).

EDIT: That layout should be more than enough, unless you pin a lot of deployments.

1 Like