Thanks for your amazing work!
I would like to report some problems I am experiencing while upgrading to and using the new f42-based version.
Please note: I really don’t want to bombard you with a pile of errors. I know the current f42 state (latest stream) is still unstable and not yet recommended for end users; I just think it might be useful for you to see these, so that unexpected compatibility issues on edge-case systems can be fixed.
Context: I was already on aurora-dx:latest, and it was working properly, but after it got bumped to f42, I’m seeing some issues.
Errors when processing the new image:
/proc/self/fd/26/usr/etc/selinux/targeted/contexts/files/file_contexts.bin: line 1 error due to: Non-ASCII characters found
/proc/self/fd/26/usr/etc/selinux/targeted/contexts/files/file_contexts.homedirs.bin: line 1 error due to: Non-ASCII characters found
(...)
Full context:
> sudo bootc update
layers already present: 36; layers needed: 36 (2.5 GB)
Fetched layers: 2.35 GiB in 7 minutes (5.74 MiB/s) /proc/self/fd/26/usr/etc/selinux/targeted/contexts/files/file_contexts.bin: line 1 error due to: Non-ASCII characters found
/proc/self/fd/26/usr/etc/selinux/targeted/contexts/files/file_contexts.homedirs.bin: line 1 error due to: Non-ASCII characters found
⠂ Deploying /proc/self/fd/20/usr/etc/selinux/targeted/contexts/files/file_contexts.bin: line 1 error due to: Non-ASCII characters found
/proc/self/fd/20/usr/etc/selinux/targeted/contexts/files/file_contexts.homedirs.bin: line 1 error due to: Non-ASCII characters found
(bootc:3845): GLib-CRITICAL **: 22:43:20.318: g_atomic_ref_count_dec: assertion 'old_value > 0' failed
⠂ Deploying
(bootc:3845): GLib-CRITICAL **: 22:43:20.333: g_atomic_ref_count_dec: assertion 'old_value > 0' failed
(bootc:3845): GLib-CRITICAL **: 22:43:20.382: g_atomic_ref_count_dec: assertion 'old_value > 0' failed
⠴ Deploying
(bootc:3845): GLib-CRITICAL **: 22:43:20.853: g_atomic_ref_count_dec: assertion 'old_value > 0' failed
(bootc:3845): GLib-CRITICAL **: 22:43:20.855: g_atomic_ref_count_dec: assertion 'old_value > 0' failed
(bootc:3845): GLib-CRITICAL **: 22:43:20.855: g_atomic_ref_count_dec: assertion 'old_value > 0' failed
(bootc:3845): GLib-CRITICAL **: 22:43:20.860: g_atomic_ref_count_dec: assertion 'old_value > 0' failed
(bootc:3845): GLib-CRITICAL **: 22:43:20.876: g_atomic_ref_count_dec: assertion 'old_value > 0' failed
(bootc:3845): GLib-CRITICAL **: 22:43:20.877: g_atomic_ref_count_dec: assertion 'old_value > 0' failed
(bootc:3845): GLib-CRITICAL **: 22:43:20.882: g_atomic_ref_count_dec: assertion 'old_value > 0' failed
/proc/self/fd/20/etc/selinux/targeted/contexts/files/file_contexts.bin: line 1 error due to: Non-ASCII characters found
/proc/self/fd/20/etc/selinux/targeted/contexts/files/file_contexts.homedirs.bin: line 1 error due to: Non-ASCII characters found
Deploying: done (24 seconds) Pruned images: 0 (layers: 0, objsize: 571.0 MB)
Queued for next boot: ostree-image-signed:docker://ghcr.io/ublue-os/aurora-dx:latest
Version: latest-42.20250416.4
Digest: sha256:df95f21f65e931c097a939d843bed14f65cf1027eebf14eb21adca974284867c
Total new layers: 72 Size: 5.4 GB
Removed layers: 71 Size: 5.6 GB
Added layers: 70 Size: 5.3 GB
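For reference, both file_contexts.bin files named in the errors are compiled binary databases (produced by sefcontext_compile), so whatever emits “Non-ASCII characters found” appears to be parsing them as text. If it helps, this can be confirmed on the deployed system (paths taken from the messages above, minus the /proc/self/fd prefix):

# Both should be reported as binary data, not text:
file /etc/selinux/targeted/contexts/files/file_contexts.bin
file /etc/selinux/targeted/contexts/files/file_contexts.homedirs.bin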
Hang at black screen after SDDM login
After SDDM login, before the Aurora logo appears, startup hangs at a black screen for around 30-50 seconds. Flatpak installation also fails; the network does not seem to be ready yet. I’m not sure what is causing the hang / black screen, but I suspect either SELinux, the failed ublue flatpak manager install, or a service race condition (see below).
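If it helps to narrow down where those 30-50 seconds go, I can collect timing data from the user session; a diagnostic sketch using standard systemd tooling (nothing Aurora-specific assumed):

# Slowest units in the user session started after SDDM login
systemd-analyze blame --user
# The chain of units the session was actually waiting on
systemd-analyze critical-chain --user
# Everything at warning level or above from the current boot
journalctl -b -p warning --no-pager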
Below I list all the errors from the journal that might be worth looking into.
Apr 16 23:40:31 hostname systemd[1]: systemd-remount-fs.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:40:31 hostname systemd[1]: systemd-remount-fs.service: Failed with result 'exit-code'.
Apr 16 23:40:31 hostname systemd[1]: Failed to start systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 23:40:31 hostname systemd-remount-fs[1123]: /usr/bin/mount for / exited with exit status 32.
Apr 16 23:40:31 hostname systemd-remount-fs[1126]: mount: /: fsconfig system call failed: overlay: No changes allowed in reconfigure.
Apr 16 23:40:31 hostname systemd-remount-fs[1126]: dmesg(1) may have more information after failed mount system call.
Apr 16 23:40:37 hostname systemd[1]: check-sb-key.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:40:37 hostname systemd[1]: check-sb-key.service: Failed with result 'exit-code'.
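The “No changes allowed in reconfigure” message suggests the remount is hitting the read-only overlay that bootc/composefs mounts at /. A sketch of the diagnostics I can provide if useful (standard tools, no assumptions beyond the paths above):

# What / is actually mounted as, and with which options
findmnt -o TARGET,FSTYPE,OPTIONS /
# Kernel-side details of the failed fsconfig call
sudo dmesg | grep -i overlay
# Full unit log for the failed remount
journalctl -b -u systemd-remount-fs.service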
SELinux errors: blocked access was reported for bootupctl, ostree, tuned-ppd, and brew. The ostree denials in particular could be worrisome. The full messages follow:
SELinux error logs:
Apr 16 23:40:41 hostname SetroubleshootPrivileged.py[2306]: failed to retrieve rpm info for path '/etc/selinux/targeted/active/modules/100/bootupd':
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing bootupctl from execute access on the file ostree. For complete SELinux messages run: sealert -l 5aceb5b3-9897-4e84-9caa-046fd4d8adcf
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing bootupctl from execute access on the file ostree.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that bootupctl should be allowed execute access on the ostree file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'bootupctl' --raw | audit2allow -M my-bootupctl
# semodule -X 300 -i my-bootupctl.pp
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing bootupctl from 'read, open' accesses on the file /usr/bin/ostree. For complete SELinux messages run: sealert -l a38f44ef-40d2-460c-a983-8d5c5b06fea2
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing bootupctl from 'read, open' accesses on the file /usr/bin/ostree.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that bootupctl should be allowed read open access on the ostree file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'bootupctl' --raw | audit2allow -M my-bootupctl
# semodule -X 300 -i my-bootupctl.pp
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing bootupctl from execute_no_trans access on the file /usr/bin/ostree. For complete SELinux messages run: sealert -l 8ea46a09-cfe0-479a-96e8-cfaece304a62
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing bootupctl from execute_no_trans access on the file /usr/bin/ostree.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that bootupctl should be allowed execute_no_trans access on the ostree file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'bootupctl' --raw | audit2allow -M my-bootupctl
# semodule -X 300 -i my-bootupctl.pp
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing ostree from map access on the file /usr/bin/ostree. For complete SELinux messages run: sealert -l 680250f2-9fe1-4dee-b964-de36191cd820
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing ostree from map access on the file /usr/bin/ostree.
***** Plugin catchall_boolean (89.3 confidence) suggests ******************
If you want to allow domain to can mmap files
Then you must tell SELinux about this by enabling the 'domain_can_mmap_files' boolean.
Do
setsebool -P domain_can_mmap_files 1
***** Plugin catchall (11.6 confidence) suggests **************************
If you believe that ostree should be allowed map access on the ostree file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'ostree' --raw | audit2allow -M my-ostree
# semodule -X 300 -i my-ostree.pp
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing ostree from write access on the directory objects. For complete SELinux messages run: sealert -l 79977b28-d768-4659-a66d-6b2af7d014d5
Apr 16 23:40:41 hostname sddm-helper-start-wayland[2236]: "QSGContext::initialize: depth buffer support missing, expect rendering errors\nQSGContext::initialize: stencil buffer support missing, expect rendering errors\n"
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing ostree from write access on the directory objects.
***** Plugin catchall_boolean (89.3 confidence) suggests ******************
If you want to allow daemons to dump core
Then you must tell SELinux about this by enabling the 'daemons_dump_core' boolean.
Do
setsebool -P daemons_dump_core 1
***** Plugin catchall (11.6 confidence) suggests **************************
If you believe that ostree should be allowed write access on the objects directory by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'ostree' --raw | audit2allow -M my-ostree
# semodule -X 300 -i my-ostree.pp
Apr 16 23:40:41 hostname setroubleshoot[2265]: failed to retrieve rpm info for path '/run/ostree-booted':
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing ostree from getattr access on the file /run/ostree-booted. For complete SELinux messages run: sealert -l b6d6536a-8285-4582-958b-6964c9441072
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing ostree from getattr access on the file /run/ostree-booted.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that ostree should be allowed getattr access on the ostree-booted file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'ostree' --raw | audit2allow -M my-ostree
# semodule -X 300 -i my-ostree.pp
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing ostree from remount access on the filesystem . For complete SELinux messages run: sealert -l ecd744ce-2882-4319-9682-beae8e237be9
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing ostree from remount access on the filesystem .
***** Plugin catchall (100. confidence) suggests **************************
If you believe that ostree should be allowed remount access on the filesystem by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'ostree' --raw | audit2allow -M my-ostree
# semodule -X 300 -i my-ostree.pp
Apr 16 23:40:41 hostname ModemManager[2012]: <msg> [base-manager] couldn't check support for device '/sys/devices/pci0000:00/0000:00:02.2/0000:01:00.0': not supported by any plugin
Apr 16 23:40:41 hostname SetroubleshootPrivileged.py[2306]: failed to retrieve rpm info for path '/etc/selinux/targeted/active/modules/100/tuned':
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing tuned-ppd from write access on the file ppd_base_profile. For complete SELinux messages run: sealert -l 36673cd4-d692-4dbc-a03d-c96c31294559
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing tuned-ppd from write access on the file ppd_base_profile.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that tuned-ppd should be allowed write access on the ppd_base_profile file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'tuned-ppd' --raw | audit2allow -M my-tunedppd
# semodule -X 300 -i my-tunedppd.pp
Apr 16 23:40:41 hostname setroubleshoot[2265]: failed to retrieve rpm info for path '/sys/firmware/acpi':
Apr 16 23:40:41 hostname systemd[2196]: Starting xdg-permission-store.service - sandboxed app permission store...
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing tuned-ppd from watch_reads access on the directory /sys/firmware/acpi. For complete SELinux messages run: sealert -l 8801a64b-747b-404e-900b-ccc8bfdcdf9a
Apr 16 23:40:41 hostname setroubleshoot[2265]: SELinux is preventing tuned-ppd from watch_reads access on the directory /sys/firmware/acpi.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that tuned-ppd should be allowed watch_reads access on the acpi directory by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'tuned-ppd' --raw | audit2allow -M my-tunedppd
# semodule -X 300 -i my-tunedppd.pp
Apr 16 23:41:39 hostname SetroubleshootPrivileged.py[3486]: failed to retrieve rpm info for path '/etc/selinux/targeted/active/modules/100/tuned':
Apr 16 23:41:39 hostname setroubleshoot[3476]: SELinux is preventing tuned-ppd from write access on the file ppd_base_profile. For complete SELinux messages run: sealert -l 36673cd4-d692-4dbc-a03d-c96c31294559
Apr 16 23:41:39 hostname setroubleshoot[3476]: SELinux is preventing tuned-ppd from write access on the file ppd_base_profile.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that tuned-ppd should be allowed write access on the ppd_base_profile file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'tuned-ppd' --raw | audit2allow -M my-tunedppd
# semodule -X 300 -i my-tunedppd.pp
Apr 16 23:41:39 hostname setroubleshoot[3476]: SELinux is preventing tuned-ppd from write access on the file /etc/tuned/ppd_base_profile. For complete SELinux messages run: sealert -l 36673cd4-d692-4dbc-a03d-c96c31294559
Apr 16 23:41:39 hostname setroubleshoot[3476]: SELinux is preventing tuned-ppd from write access on the file /etc/tuned/ppd_base_profile.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that tuned-ppd should be allowed write access on the ppd_base_profile file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'tuned-ppd' --raw | audit2allow -M my-tunedppd
# semodule -X 300 -i my-tunedppd.pp
Apr 17 00:09:49 hostname SetroubleshootPrivileged.py[5799]: failed to retrieve rpm info for path '/etc/selinux/targeted/active/modules/100/init':
Apr 17 00:09:49 hostname setroubleshoot[5791]: SELinux is preventing (brew) from read access on the lnk_file brew. For complete SELinux messages run: sealert -l 92a850fa-05f9-4f87-bf8b-cc84e364ec41
Apr 17 00:09:49 hostname setroubleshoot[5791]: SELinux is preventing (brew) from read access on the lnk_file brew.
***** Plugin catchall (100. confidence) suggests **************************
If you believe that (brew) should be allowed read access on the brew lnk_file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c '(brew)' --raw | audit2allow -M my-brew
# semodule -X 300 -i my-brew.pp
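In case the setroubleshoot summaries above are not enough, I can also pull the raw AVC records for the whole boot; a sketch with the standard audit tooling (nothing custom assumed):

# All SELinux denials since the current boot, in raw audit form
sudo ausearch -m AVC,USER_AVC -ts boot --raw
# The same denials translated into a candidate policy module, without installing it
sudo ausearch -m AVC,USER_AVC -ts boot --raw | audit2allow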
Flatpak is unable to resolve the hostname (I suspect the network is not ready yet at that point):
Apr 16 23:41:12 hostname ublue-flatpak-manager[2599]: Looking for matches…
Apr 16 23:41:12 hostname ublue-flatpak-manager[2599]: F: An error was encountered searching remote ‘flathub’ for ‘app/sh.loft.devpod/x86_64/stable’: Unable to load summary from remote flathub: While fetching https://dl.flathub.org/repo/summary.idx: [6] Could not resolve hostname
Apr 16 23:41:12 hostname ublue-flatpak-manager[2599]: error: No remote refs found for ‘app/sh.loft.devpod/x86_64/stable’
I don’t have an Ethernet connection, and the Wi-Fi connection came up a bit later, only after the desktop had already appeared. The Wi-Fi secret is encrypted in KWallet, so it can only be accessed after a successful login. Unfortunately, the ublue flatpak manager script does not wait for this to finish.
Nevertheless, I saw the “Welcome to Aurora! New flatpaks have been installed” popup message, even though the installation had failed…
More details here:
Apr 16 23:41:12 hostname NetworkManager[2031]: <info> [1744839672.4583] device (wlp1s0): state change: prepare -> config (reason 'none', managed-type: 'full')
Apr 16 23:41:12 hostname NetworkManager[2031]: <info> [1744839672.4590] device (wlp1s0): Activation: (wifi) access point 'wifinetworkname' has security, but secrets are required.
Apr 16 23:41:12 hostname NetworkManager[2031]: <info> [1744839672.4590] device (wlp1s0): state change: config -> need-auth (reason 'none', managed-type: 'full')
Apr 16 23:41:12 hostname NetworkManager[2031]: <warn> [1744839672.4600] device (wlp1s0): no secrets: No agents were available for this request.
Apr 16 23:41:12 hostname NetworkManager[2031]: <info> [1744839672.4600] device (wlp1s0): state change: need-auth -> failed (reason 'no-secrets', managed-type: 'full')
Apr 16 23:41:12 hostname NetworkManager[2031]: <info> [1744839672.4602] manager: NetworkManager state is now DISCONNECTED
(...)
Apr 16 23:41:12 hostname systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 16 23:41:12 hostname ublue-flatpak-manager[2599]: Looking for matches…
Apr 16 23:41:12 hostname ublue-flatpak-manager[2599]: F: An error was encountered searching remote ‘flathub’ for ‘app/sh.loft.devpod/x86_64/stable’: Unable to load summary from remote flathub: While fetching https://dl.flathub.org/repo/summary.idx: [6] Could not resolve hostname
Apr 16 23:41:12 hostname ublue-flatpak-manager[2599]: error: No remote refs found for ‘app/sh.loft.devpod/x86_64/stable’
(...)
Apr 16 23:41:56 hostname NetworkManager[2031]: <info> [1744839716.4897] device (wlp1s0): state change: prepare -> config (reason 'none', managed-type: 'full')
Apr 16 23:41:56 hostname NetworkManager[2031]: <info> [1744839716.4907] device (wlp1s0): Activation: (wifi) access point 'wifinetworkname' has security, but secrets are required.
Apr 16 23:41:56 hostname NetworkManager[2031]: <info> [1744839716.4908] device (wlp1s0): state change: config -> need-auth (reason 'none', managed-type: 'full')
Apr 16 23:41:56 hostname NetworkManager[2031]: <info> [1744839716.5578] device (wlp1s0): supplicant interface state: inactive -> scanning
Apr 16 23:41:56 hostname NetworkManager[2031]: <info> [1744839716.5579] device (p2p-dev-wlp1s0): supplicant management interface state: inactive -> scanning
(...)
Apr 16 23:41:56 hostname systemd[2477]: Finished plasma-ksplash.service - Splash screen shown during boot.
Apr 16 23:41:56 hostname systemd[2477]: plasma-ksplash.service: Consumed 1.599s CPU time, 44.2M memory peak.
Apr 16 23:41:57 hostname ublue-flatpak-manager[2533]: Flatpak manager v2 has already ran. Exiting...
Apr 16 23:41:57 hostname systemd[2477]: Finished ublue-flatpak-manager.service - Manage flatpaks.
Apr 16 23:41:57 hostname systemd[2477]: Reached target default.target - Main User Target.
(...Wi-Fi connects a few seconds after this...)
Apr 16 23:42:00 hostname NetworkManager[2031]: <info> [1744839720.1366] device (p2p-dev-wlp1s0): supplicant management interface state: 4way_handshake -> completed
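As a local workaround I am considering the following (an untested sketch, assuming ublue-flatpak-manager.service accepts a user-level drop-in): make the service block until NetworkManager reports connectivity before it runs.

# Open a drop-in editor for the user service
systemctl --user edit ublue-flatpak-manager.service
# ...and add the following; nm-online exits non-zero on timeout, so the unit
# would fail visibly instead of pretending the flatpaks were installed:
[Service]
ExecStartPre=/usr/bin/nm-online -q --timeout=120

This would not solve the deeper problem that the Wi-Fi secret only becomes available after login, but at least the script would stop racing the connection.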
Marking boot as successful fails (this was already the case on f41):
Apr 16 23:43:42 hostname systemd[2477]: Starting grub-boot-success.service - Mark boot as successful...
Apr 16 23:43:42 hostname grub2-set-bootflag[3933]: Error canonicalizing /boot/grub2/grubenv filename: No such file or directory
Apr 16 23:43:42 hostname systemd[2477]: grub-boot-success.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:43:42 hostname systemd[2477]: grub-boot-success.service: Failed with result 'exit-code'.
Apr 16 23:43:42 hostname systemd[2477]: Failed to start grub-boot-success.service - Mark boot as successful.
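For what it’s worth, grub2-set-bootflag fails because the path it canonicalizes does not exist here; a quick check (the paths are the Fedora defaults, so treat them as assumptions):

# grub2-set-bootflag canonicalizes this path and fails if it is absent
ls -l /boot/grub2/grubenv
# On EFI installs the real file often lives here, with /boot/grub2/grubenv
# expected to be a symlink to it
ls -l /boot/efi/EFI/fedora/grubenv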
I hope these help you make the system more stable.
Thank you!