Kinoite-nvidia question

On 2025-05-09, I posted here to ask a question about Kinoite, and I was told that Kinoite is maintained by Fedora and that I was in the wrong forum. However, when I asked at the Fedora Atomic Desktop forums (where I was allowed to post my rpm-ostree status output), someone pointed out that I am running ublue-os kinoite-nvidia (Kinoite with the proprietary Nvidia driver included) rather than fedora x86_64 kinoite. This makes sense, as I need supergfxctl so that I can switch the GPU mode to Vfio, and supergfxctl is not included with Fedora Kinoite.
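
For context, the mode switch I depend on looks like this (a sketch, assuming supergfxctl's usual CLI flags):

$ supergfxctl --get        # show the current graphics mode
$ supergfxctl --mode Vfio  # hand the dGPU over to vfio-pci for VM passthrough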

Anyway, if this is still not the correct place to ask this question, I ask for both your understanding and guidance as to where the proper place to ask might be.

My new question is: why have updates stopped? I currently find myself running on a Fedora 41 system. Running

$ rpm-ostree upgrade

results in no errors and the final message

No upgrade available.

It appears I am using the "latest" tag on the docker link. Again, I'd love to post the rpm-ostree status output so that you might see the docker URLs for yourself, but this forum won't currently let me (I guess because I am a new user).

I checked GitHub, and the "latest" tag for kinoite-nvidia at this writing points at tag 42-20250513.2. My current system is 41.20250403.0, per rpm-ostree status.
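
One way to double-check what :latest resolves to without the GitHub UI (a sketch; assumes skopeo is installed and that the image carries the standard OCI version label):

$ skopeo inspect docker://ghcr.io/ublue-os/kinoite-nvidia:latest \
    | jq -r '.Labels."org.opencontainers.image.version"'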

I've run into update problems on my personal Bazzite systems before, where I had too many deployments pinned for the size of the /boot partition, but that does not appear to be the case here. Running df -h shows that I have 452 MB available on /boot. Only my current deployment is pinned.
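
For anyone retracing this, the checks I mean are (the unpin index below is hypothetical; match it to your own rpm-ostree status output):

$ df -h /boot                          # free space on the boot partition
$ rpm-ostree status | grep -i pinned   # which deployments are pinned
$ sudo ostree admin pin --unpin 1      # unpin a stale deployment by its index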

I've also tried running

$ sudo rpm-ostree cleanup -m
$ sudo rpm-ostree refresh-md

but this does not change the behavior when I try to update.

My system is running fine, but I'm using this for work, so I really need to get the updates flowing again. Any suggestions please?

I just noticed I was upgraded to "Basic", so here is my rpm-ostree status output.

State: idle
AutomaticUpdates: stage; rpm-ostreed-automatic.timer: last run 5h 10min ago
Deployments:
fedora:fedora/41/x86_64/kinoite
Version: 41.20250510.0 (2025-05-10T01:22:37Z)
BaseCommit: 6f556c9fd9ae39ad7c6d8f07b0f45eccf2c96b3ac50f48401df840f71e7d03ec
GPGSignature: Valid signature by 466CF2D8B60BC3057AA9453ED0622462E99D6AD1
Diff: 428 upgraded, 11 downgraded, 218 removed, 24 added
LayeredPackages: containerd.io docker-buildx-plugin docker-ce docker-ce-cli docker-compose-plugin guestfs-tools libvirt-daemon-config-network
libvirt-daemon-kvm libvirt-nss python3-libguestfs qemu-kvm virt-install virt-manager virt-top virt-viewer
LocalPackages: veracrypt-1.26.20-1.x86_64 virtio-win-0.1.266-1.noarch

ā— ostree-image-signed:docker://ghcr.io/ublue-os/kinoite-nvidia:latest
Digest: sha256:f304f7ec8614537b5b1ef6121cdad6c2c720a9c33738ddd2c2d30a2d58a71b78
Version: 41.20250403.0 (2025-04-04T04:10:40Z)
LayeredPackages: containerd.io docker-buildx-plugin docker-ce docker-ce-cli docker-compose-plugin guestfs-tools libvirt-daemon-config-network
libvirt-daemon-kvm libvirt-nss python3-libguestfs qemu-kvm virt-install virt-manager virt-top virt-viewer
LocalPackages: veracrypt-1.26.20-1.x86_64 virtio-win-0.1.266-1.noarch
Pinned: yes

You can see that yesterday I tried rebasing to Fedora Kinoite. It seemed to go fine, but I was unable to fully test it due to the lack of supergfxctl. I actually built supergfxctl in a Toolbox container, but I couldn't figure out how to get the service to run on the host system.
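
In hindsight, layering it on the host would probably have been cleaner than building in Toolbox; a sketch, assuming the asus-linux COPR (lukenukem/asus-linux) still ships supergfxctl and that the repo-file URL follows COPR's usual pattern:

$ sudo curl -Lo /etc/yum.repos.d/lukenukem-asus-linux.repo \
    https://copr.fedorainfracloud.org/coprs/lukenukem/asus-linux/repo/fedora-41/lukenukem-asus-linux-fedora-41.repo
$ sudo rpm-ostree install supergfxctl
$ # after rebooting into the new deployment:
$ sudo systemctl enable --now supergfxd.service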

I'm fine with staying on the ublue-os version, provided I can get updates working again.

Tough to tell, but I would start with an rpm-ostree reset to get rid of this mess first, and then try upgrading.
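
Concretely, something like this (note that reset drops all layered and local packages, so plan to re-add them afterwards):

$ sudo rpm-ostree reset    # drop layering and overrides, back to the plain base image
$ sudo rpm-ostree upgrade  # then pull the latest image
$ systemctl reboot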

I could ditch docker, but to my knowledge, layering is the only viable way to install qemu and libvirt. I should point out that this is a working system that has been receiving updates normally since January/February (under kinoite-nvidia) and since November 2024 (before I rebased from Bazzite to kinoite-nvidia) with no apparent issues.
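
For reference, re-layering the virtualization stack after a reset would look something like this (package list taken from the status output above):

$ sudo rpm-ostree install qemu-kvm libvirt-daemon-kvm libvirt-daemon-config-network \
    libvirt-nss guestfs-tools python3-libguestfs \
    virt-install virt-manager virt-top virt-viewer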

Below is the output from the last time I ran rpm-ostree upgrade earlier today.

note: automatic updates (stage) are enabled
2 metadata, 0 content objects fetched; 788 B transferred in 1 seconds; 0 bytes content written
Checking out tree 6f556c9... done
Enabled rpm-md repositories: copr:copr.fedorainfracloud.org:ublue-os:akmods updates fedora google-chrome rpmfusion-nonfree-nvidia-driver rpmfusion-nonfree-steam copr:copr.fedorainfracloud.org:phracek:PyCharm docker-ce-stable updates-archive
Updating metadata for 'copr:copr.fedorainfracloud.org:ublue-os:akmods'... done
Importing rpm-md... done
rpm-md repo 'copr:copr.fedorainfracloud.org:ublue-os:akmods'; generated: 2025-05-13T04:06:31Z solvables: 139
rpm-md repo 'updates' (cached); generated: 2025-05-13T01:59:59Z solvables: 27148
rpm-md repo 'fedora' (cached); generated: 2024-10-24T13:55:59Z solvables: 76624
rpm-md repo 'google-chrome' (cached); generated: 2025-05-13T11:00:00Z solvables: 4
rpm-md repo 'rpmfusion-nonfree-nvidia-driver' (cached); generated: 2025-05-02T23:55:33Z solvables: 17
rpm-md repo 'rpmfusion-nonfree-steam' (cached); generated: 2025-04-18T08:19:16Z solvables: 1
rpm-md repo 'copr:copr.fedorainfracloud.org:phracek:PyCharm' (cached); generated: 2025-05-07T06:46:06Z solvables: 7
rpm-md repo 'docker-ce-stable' (cached); generated: 2025-04-18T10:46:28Z solvables: 70
rpm-md repo 'updates-archive' (cached); generated: 2025-05-13T02:38:49Z solvables: 52343
Resolving dependencies... done
No upgrade available.


You could add all that stuff back after you’ve done a successful upgrade to F42.

Haha true enough. I'm just trying to avoid that if possible. It was a chunk of work and I feel lucky I got a working system out of it. Well, working until I developed this update problem.

Potentially significant info I just noticed: on the GitHub page (assuming I'm looking in the right place), there's a banner at the top that says "This repository was archived by the owner on Apr 27, 2025. It is now read-only." Relevant?

The package is still published, just not in that repo (which we archived): Package kinoite-nvidia · GitHub

Sorry, I'm not sure I understand. Are you suggesting that I need to rebase to change my remote ref? The link you posted is the same one that I posted in my previous comment.

Yeah, sorry, GitHub's rewriting of the URL is annoying, but that's the package you should be on. After a reset, run an upgrade, and you should be on that image.
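
In other words, something like this (using the exact ref from the status output above):

$ sudo rpm-ostree reset
$ sudo rpm-ostree upgrade
$ # or, to re-target the image ref explicitly:
$ sudo rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/kinoite-nvidia:latest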

Found another potentially relevant item. The URL line under [remote "default"] in my /sysroot/ostree/repo/config references "bazzite-nvidia-open-stable". As stated previously in this thread, this was initially a Bazzite system before I rebased to kinoite-nvidia. As for the path, I don't have an install under /run or /var/run, so I'm guessing this line is not about my system but about the remote system.

$ cat /sysroot/ostree/repo/config
[core]
repo_version=1
mode=bare

[remote "default"]
url=/run/install/repo/bazzite-nvidia-open-stable
gpg-verify=false

[sysroot]
readonly=true
bootloader=none
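
For anyone checking the same thing, the remotes can also be inspected with the standard ostree subcommands (pointed at the booted repo):

$ ostree remote list --repo=/ostree/repo               # configured remotes
$ ostree remote show-url default --repo=/ostree/repo   # the URL behind [remote "default"]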

I really do appreciate your suggestions and the time you've taken to interact with me, but having to reinstall qemu and libvirt for every upgrade might make an atomic desktop non-viable for my use case.

Could you perhaps suggest a way for me to prove that the layered packages are interfering with the upgrade? I'm a little shocked that there wouldn't be additional errors in the update output, or failing that, in the journal, if this were indeed the case.
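
In case it helps anyone else, these are the checks I know of (standard rpm-ostree and systemd tooling; nothing ublue-specific):

$ rpm-ostree upgrade --preview                 # fetch metadata and show the pending diff without deploying
$ rpm-ostree status -v                         # verbose deployment details
$ journalctl -b -u rpm-ostreed --since today   # the update daemon's service log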

No clue how to diagnose this; you're running an unsupported configuration, so you'd have to dig into the service logs and find out what the root cause is.

You're using a base image that we use for generating our end-user images; something like Aurora DX is probably a better fit.


Oh! I wish you'd led with that. I wasn't looking to end up in this situation but rather sort of fell ass first into it. :smiley:

I'll see if I can rebase after lunch. Thanks again for your time.

Everything works! Thank you, j0rge. :heart:
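
For anyone landing here later, the rebase itself was the usual Universal Blue two-step (the unverified ref first, then the signed ref after a reboot), which is why two aurora deployments show up below; from memory, roughly:

$ sudo rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/aurora-dx-nvidia-open:stable
$ systemctl reboot
$ sudo rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/aurora-dx-nvidia-open:stable
$ systemctl reboot

The resulting status: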

State: idle
AutomaticUpdates: stage; rpm-ostreed-automatic.timer: no runs since boot
Deployments:
ā— ostree-image-signed:docker://ghcr.io/ublue-os/aurora-dx-nvidia-open:stable
Digest: sha256:e69162188b836a513174c9720acc7db95ef2ae0c654b3d762d1253145c811e7b
Version: 41.20250511 (2025-05-11T06:35:20Z)
LayeredPackages: python3-libguestfs virt-top
LocalPackages: veracrypt-1.26.20-1.x86_64 virtio-win-0.1.266-1.noarch
Pinned: yes

ostree-unverified-registry:ghcr.io/ublue-os/aurora-dx-nvidia-open:stable
Digest: sha256:e69162188b836a513174c9720acc7db95ef2ae0c654b3d762d1253145c811e7b
Version: 41.20250511 (2025-05-11T06:35:20Z)
LayeredPackages: python3-libguestfs virt-top
LocalPackages: veracrypt-1.26.20-1.x86_64 virtio-win-0.1.266-1.noarch

ostree-image-signed:docker://ghcr.io/ublue-os/kinoite-nvidia:latest
Digest: sha256:f304f7ec8614537b5b1ef6121cdad6c2c720a9c33738ddd2c2d30a2d58a71b78
Version: 41.20250403.0 (2025-04-04T04:10:40Z)
LayeredPackages: containerd.io docker-buildx-plugin docker-ce docker-ce-cli docker-compose-plugin guestfs-tools libvirt-daemon-config-network
libvirt-daemon-kvm libvirt-nss python3-libguestfs qemu-kvm virt-install virt-manager virt-top virt-viewer
LocalPackages: veracrypt-1.26.20-1.x86_64 virtio-win-0.1.266-1.noarch
Pinned: yes

This is a red herring and is an artifact from your original install.

Would you expand on this comment please? What is the red herring?

Ostree lists the original install location in the config. It's a fake clue.

Ah I understand. Tyvm.

As I'm sure you know, F42 went wide yesterday for :stable, and it was staged on my box when I got on this morning. Rebooted, et voilà! Updates. :smiley: :+1:

State: idle
AutomaticUpdates: stage; rpm-ostreed-automatic.timer: no runs since boot
Deployments:
ā— ostree-image-signed:docker://ghcr.io/ublue-os/aurora-dx-nvidia-open:stable
Digest: sha256:cb0de9e6928c2259fe4e32b62275b11eccae727ec072d3eebca5713254feaa34
Version: 42.20250514.6 (2025-05-14T18:55:31Z)
LayeredPackages: python3-libguestfs virt-top
LocalPackages: veracrypt-1.26.20-1.x86_64 virtio-win-0.1.266-1.noarch

ostree-image-signed:docker://ghcr.io/ublue-os/aurora-dx-nvidia-open:stable
Digest: sha256:e69162188b836a513174c9720acc7db95ef2ae0c654b3d762d1253145c811e7b
Version: 41.20250511 (2025-05-11T06:35:20Z)
LayeredPackages: python3-libguestfs virt-top
LocalPackages: veracrypt-1.26.20-1.x86_64 virtio-win-0.1.266-1.noarch
Pinned: yes

ostree-image-signed:docker://ghcr.io/ublue-os/kinoite-nvidia:latest
Digest: sha256:f304f7ec8614537b5b1ef6121cdad6c2c720a9c33738ddd2c2d30a2d58a71b78
Version: 41.20250403.0 (2025-04-04T04:10:40Z)
LayeredPackages: containerd.io docker-buildx-plugin docker-ce docker-ce-cli docker-compose-plugin guestfs-tools libvirt-daemon-config-network
libvirt-daemon-kvm libvirt-nss python3-libguestfs qemu-kvm virt-install virt-manager virt-top virt-viewer
LocalPackages: veracrypt-1.26.20-1.x86_64 virtio-win-0.1.266-1.noarch
Pinned: yes
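
Last bit of housekeeping on my list: unpinning the old kinoite-nvidia deployment now that I'm confident in the new image (the index is hypothetical; check rpm-ostree status for yours):

$ sudo ostree admin pin --unpin 2   # release the old pinned deployment
$ # the unpinned deployment is pruned automatically on a later deploy/cleanup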
