/boot partition is running low on space

Hi folks

I’ve been using Bluefin Linux for a few weeks now and I’m really impressed. However, I recently started seeing a notification after logging in that my /boot partition is running low on space, with only 26 MB remaining. This seems odd because my entire SSD is 500 GB and has plenty of free space overall.
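For reference, this is roughly how I’m checking the usage (the fallback to / is just so the snippet runs on machines without a separate /boot):

```shell
# Show usage of /boot; fall back to / if /boot doesn't exist
# on the machine where you run this.
target=/boot
[ -d "$target" ] || target=/
df -hP "$target"
```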

While I’m comfortable manually resizing the /boot/ partition if that’s the recommended solution, I wanted to double-check before tinkering. Is there a more elegant way to perhaps prune or clean up unnecessary files within /boot/?

I’m wondering if my daily update routine might be contributing to this issue. Since I want to keep everything fresh, I tend to run update scripts quite frequently. Could this be causing the issue?

Any advice from the Bluefin community would be greatly appreciated.

I’ve never seen this happen before. Did you do some kind of custom partitioning, or did you use the automatic partitioning?

Recommended size for /boot is 1024 MB.
Read more here: Installing Fedora Silverblue :: Fedora Docs

The EFI partition is 350 to 600 MB, which is more than enough. What is surprising is seeing your /boot running so low; it shouldn’t. Could it be related to this?

Could you elaborate? What scripts are you running? Could you run tree /boot and post the results here as </> preformatted text?

Yep, automatic partitioning.

No scripts, just ujust devmode. Here we go, tree /boot:

/boot
├── boot -> .
├── efi  [error opening dir]
├── grub2  [error opening dir]
├── loader -> loader.0
├── loader.0
│   ├── entries
│   │   ├── ostree-1.conf
│   │   ├── ostree-2.conf
│   │   ├── ostree-3.conf
│   │   └── ostree-4.conf
│   └── grub.cfg
├── lost+found  [error opening dir]
└── ostree
    ├── default-0d561005936a80b561d1c9bacbd8144f5f2c4823a4f1b3eefada9ac3
    │   ├── initramfs-6.8.7-200.fc39.x86_64.img
    │   └── vmlinuz-6.8.7-200.fc39.x86_64
    ├── default-305845502f094b1cb0af30cc3c8482257a90e168c3d263816739893a
    │   ├── initramfs-6.8.7-200.fc39.x86_64.img
    │   └── vmlinuz-6.8.7-200.fc39.x86_64
    ├── default-4e0400c71ff05e8047e0de9e0c443d82179e2568b807dc58871990fc
    │   ├── initramfs-6.8.7-200.fc39.x86_64.img
    │   └── vmlinuz-6.8.7-200.fc39.x86_64
    └── default-f007e8fc9220cfd030dc94716cabb2ffcf08d654b7fe70187124b27a
        ├── initramfs-6.8.7-200.fc39.x86_64.img
        └── vmlinuz-6.8.7-200.fc39.x86_64

13 directories, 13 files

I faced the same issue when testing a build from 39 to 40 with the auto-sized boot partition (1024 MB).

I have since reverted, but seem to recall an error I saw that was related to kargs. Do we store them on that partition when updating?

arenas, any chance you could share how you reverted the changes? I’m still running into the same /boot space issue. I’m worried I might have to reinstall everything if I run out of space there.

Can you add the output of rpm-ostree status?

I use a Ventoy boot USB with a RescueZilla ISO on it for backup and recovery purposes. I restored my system to an earlier backup through RescueZilla, which reinstated the state of all drive partitions, but then further rebased my system to include customizations, so I was back in the same situation.

Following the suggestion from @j0rge, I checked the output of rpm-ostree status and noticed I had accumulated extra images from versions I had previously pinned. To free up space, I opted to remove these old versions. Instead of manually deleting them from /boot/ostree or using the rpm-ostree cleanup command, I chose a simpler GUI approach by deleting them through Cockpit.

I thought about going the partition resize route using GParted on the Ventoy boot USB, but am hesitant to get into a Grub boot up hell-cycle.

Resizing /boot from GParted works fine; I’ve done it a couple of times. Resizing is not a problem, but moving it is. Now I size /boot at 2048 MB in new Silverblue installations. The problem I found with a small /boot is that the OS (in my case Bluefin-DX) doesn’t warn you when there is no space left while it’s updating; it simply does nothing, and you remain unaware of it. That’s why it’s a good idea to run rpm-ostree status after each update to ensure it has actually been performed. Another issue with a small /boot partition is that it forces you to delete pinned OS versions, until only one remains pinned, before it accepts a new update. This is not at all attributable to Bluefin-DX but to Fedora Silverblue itself.

Thanks folks for the help with the /boot space issue! I managed to fix it today using Cockpit. Turns out I had 2 pinned versions taking up way too much space. Unpinned and deleted them, and all good now.


There was no concrete answer here, but I cobbled together what everyone said. I spent 30 min wandering around Silverblue documentation and these forums for this solution:

Run rpm-ostree status and see what images you have:

❯ rpm-ostree status
State: idle
AutomaticUpdates: stage; rpm-ostreed-automatic.timer: last run 7h ago
● ostree-image-signed:docker://
                   Digest: sha256:1ab513c88dea2aacb07c9405bf9e1a0b7400d2788352218e70413063fbb3c66b
                  Version: 40.20240702.0 (2024-07-02T20:02:06Z)
            LocalPackages: veracrypt-1.26.7-1.x86_64

                   Digest: sha256:3a86261872d8d7d67c9b6cd494c57c186d9bc27626aeaa978a954babd70be2ed
                  Version: 40.20240702.0 (2024-07-02T05:58:47Z)
            LocalPackages: veracrypt-1.26.7-1.x86_64

                   Digest: sha256:702787f98ecb8e4b296a2e5f332b09edd07d8bed2ce9ca8f6f9fbee1f80341e9
                  Version: 40.20240627.0 (2024-06-27T15:13:20Z)
            LocalPackages: veracrypt-1.26.7-1.x86_64
                   Pinned: yes

                   Digest: sha256:fca1d3d6a0c6eebee160c2d83223fcbb073b7e214b647a975d023938d57c4b89
                  Version: 39.20240529.0 (2024-05-30T17:04:18Z)
            LocalPackages: veracrypt-1.26.7-1.x86_64
                   Pinned: yes

Deployments are numbered starting at 0 (for me, 0–3). I have 2 pinned images: deployments 2 and 3.
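If you just want a quick count of pinned deployments, grep works. The snippet below greps a captured sample so it’s self-contained; on a live system you’d pipe rpm-ostree status straight into grep instead:

```shell
# Count pinned deployments. On a real system:
#   rpm-ostree status | grep -c 'Pinned: yes'
# Here we grep a captured sample so the snippet is self-contained.
status_sample='Version: 40.20240627.0
Pinned: yes
Version: 39.20240529.0
Pinned: yes'
pinned=$(printf '%s\n' "$status_sample" | grep -c 'Pinned: yes')
echo "Pinned deployments: $pinned"
```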

To delete pinned images, you need to unpin them first.

sudo ostree admin pin --unpin 3
sudo ostree admin pin --unpin 2
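If you have several pinned deployments, a small loop saves retyping. This sketch only prints the commands (pipe it to sh to actually run them); the indices 3 and 2 are from my system, so check rpm-ostree status for yours first:

```shell
# Print the unpin commands for the given deployment indices.
# Indices are system-specific -- check `rpm-ostree status` first.
unpin_cmds() {
  for idx in "$@"; do
    echo "sudo ostree admin pin --unpin $idx"
  done
}
unpin_cmds 3 2            # preview the commands
# unpin_cmds 3 2 | sh     # actually run them
```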

For the life of me, I have no clue why pin --unpin is a thing. It’s not intuitive at all.

Then use rpm-ostree to clean it up.

rpm-ostree cleanup

You’ll get output that looks like this.

Transaction complete; bootconfig swap: yes; bootversion: boot.1.1, deployment count change: -3
Pruned images: 1 (layers: 188)
Freed: 11.9 GB (pkgcache branches: 0)

Also, the boot partition being 600 MB is because of the installer, not because of user error. This needs to be addressed because it’s unreasonable to expect someone to repartition their drive or reinstall.

Either there needs to be a ujust tool to clean up or manage images, OR there needs to be a background task so that when this happens, the user is prompted to clean them up in a GUI tool.

Unfortunately, this seems like an upstream problem, not so much a uBlue problem. Thanks for reading my doctorate thesis.

Do we know if there’s an upstream issue on this?

I could not find an open issue, but I opened one on the Silverblue issue tracker if you are interested: Not enough memory allocated to the default boot partition · Issue #580 · fedora-silverblue/issue-tracker · GitHub


I had the same problem, but in my case, after unpinning 2 deployments the command rpm-ostree cleanup didn’t work; I had to run rpm-ostree cleanup -r.

The Nvidia initramfs has all of the Nvidia modules inside of it.

While we compress it with zstd, it is still on the larger side.

Fedora defaults to an EFI partition of 600 MB and a /boot partition of 1 GB. For Nvidia, that means you can only fit 4 deployments in /boot. For AMD/Intel, I believe it’s 7.

Considering that Fedora Workstation defaults to keeping 2 kernels, 4 deployments is already more than what it supports by default.
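Back-of-the-envelope version of that math. The per-deployment sizes below are rough assumptions for illustration, not measured values:

```shell
# Rough per-deployment sizes in /boot (vmlinuz + initramfs).
# These numbers are assumptions for illustration, not measurements.
boot_mb=1024
nvidia_deploy_mb=230    # large initramfs with the Nvidia modules baked in
generic_deploy_mb=130   # stock AMD/Intel initramfs

fits() {  # fits <partition_mb> <per_deployment_mb>
  echo $(( $1 / $2 ))
}

echo "Nvidia deployments that fit:  $(fits "$boot_mb" "$nvidia_deploy_mb")"
echo "Generic deployments that fit: $(fits "$boot_mb" "$generic_deploy_mb")"
```

With those assumed sizes the integer division lands on the same 4-vs-7 split described above.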

But this is something that definitely should be documented more.