I don’t want to promise that people can rebase, because even that is highly conditional on how a user chose to install CoreOS: whether they used the default disk layout, filesystem, Secure Boot, etc.
I do want people to be able to rebase, if only for selfish reasons, I’d like to do the same.
The gotchas I know about already:
CentOS currently does not have a signed shim for Secure Boot, so Secure Boot would have to be disabled for any user relying on it. This IS being worked on, and it seems to be getting close.
CentOS’ stock kernel does not include several filesystems that are part of Fedora’s kernel, most notably btrfs, but also ntfs; I haven’t checked the full list.
CentOS’ stock kernel does not include some hardware support by default (mostly older RAID/SAS controllers) which Fedora does.
Expected workarounds for the above:
For secure boot, we’ll have to wait.
For the kernel issues, I’d like to do a rebuild of the stock kernel, but with some of these missing things re-enabled. I don’t expect that to be available immediately, but Universal Blue already builds a kernel for Bazzite, so we can certainly repurpose some of that tooling for this.
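If you want to check ahead of time whether a given kernel build covers your needs, something like this works (the config path is where Fedora’s kernel-core ships it; on a traditional install it may be /boot/config-&lt;version&gt; instead):

```bash
# Check which filesystem options this kernel build enables
grep -E 'CONFIG_(BTRFS|NTFS)' "/usr/lib/modules/$(uname -r)/config"

# Confirm whether the modules themselves are shipped with the kernel
find "/usr/lib/modules/$(uname -r)" \( -name 'btrfs.ko*' -o -name 'ntfs3.ko*' \)
```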
Gotcha. Indeed, if the new thing you’re planning is CentOS Stream based, then there are going to be quite a few differences.
I’m going to take a stab at making a custom image derived from a Fedora bootc image. I expect rebasing to that from uCore will be less of a jump, but I’ve got a lot to learn on this topic.
Or do like @pauldo and make a Fedora-based bootc custom image in the meantime, which should be possible to rebase to the new uCore. That’s what I’m planning to do as well.
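A minimal sketch of what that can look like (registry, tag, and package choice below are placeholders, not what the new uCore will ship):

```bash
# Build a custom image on top of the Fedora bootc base
cat > Containerfile <<'EOF'
FROM quay.io/fedora/fedora-bootc:42
# Layer whatever you need; keep it small so rebasing stays cheap
RUN dnf -y install cockpit-podman && dnf clean all
EOF

podman build -t registry.example.com/my-server:latest .
podman push registry.example.com/my-server:latest

# On a bootc host, point the system at the new image
sudo bootc switch registry.example.com/my-server:latest
```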
I really prefer the bootc approach. It seems to me to be a lot easier to test. I tried the tooling, it’s great, and it’s already installed in Aurora. (Of course!)
I do think there’s a value in using the CentOS kABI stable kernel, however.
After things are moving along, it probably does make sense to have a secondary, more modern kernel option, but CentOS 10 is on kernel 6.12 right now… This is a good starting point.
But anyone who is considering rebasing IS a technical person. Anyone else would be wondering what you are talking about.
So, if there is any doubt as to whether rebase would work - just reinstall!
As a technical person you:
have a tested backup regimen and a post-install checklist / script
are not layering ANYTHING except those things covered in your checklist
are disciplined enough not to rely on what is installed in the host OS (it is just there to run the various kinds of containers in use)
avoid modifying ANYTHING outside of $HOME except what is covered in your checklist
I know there are exceptions to those points. But if you (as a technical person) are not doing everything you can to reduce the friction to re-install… well, …
Shame on you.
I adopted many of these things with Fedora WS because I kept kicking myself for avoiding upgrading - I, too, felt the friction to upgrade.
So I adopted a policy to ONLY back up my $HOME dir, with many excluded dirs, and developed a complete pre-, during-, and post-install checklist. I have tested it many times over the years, and I am disciplined enough to keep it up to date!
Backups happen to my NAS via a few scripts that I treat like gold. I have corresponding restore scripts for re-installs. None of that is automated. I have to intentionally build the habit of running backups as significant things happen on / to my desktop.
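The backup script itself is nothing exotic, roughly this shape (the NAS path and exclude list here are placeholder examples, not my real ones):

```bash
#!/usr/bin/env bash
# Back up $HOME to the NAS, skipping caches and anything reproducible
set -euo pipefail

DEST="nas.local:/volume1/backups/$(hostname)/home"

rsync -aH --delete \
  --exclude='.cache/' \
  --exclude='.local/share/containers/' \
  --exclude='Downloads/' \
  "$HOME/" "$DEST/"
```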
I am convinced that if you will adopt these things - you, too, can avoid the “can I rebase” thought process. For me, especially with bootc images, it is MUCH simpler and less risky to just reinstall.
I feel like what I want out of ublue for something like this is the easiest way to set up a secure homelab/Jellyfin/Nextcloud/SSO stack with as little effort as possible. This is already somewhat possible on NixOS with Nixarr, but I think there is a lot ublue could do in this space that would be very good.
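Even today a single service is close to a one-liner with podman; what I’m hoping for is that ublue curates and wires this kind of thing together. For example (image, port, and host paths below are only examples):

```bash
# Jellyfin via rootless podman
podman run -d --name jellyfin \
  -p 8096:8096 \
  -v ~/jellyfin/config:/config:Z \
  -v ~/jellyfin/cache:/cache:Z \
  -v /srv/media:/media:ro,Z \
  docker.io/jellyfin/jellyfin:latest
```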
I question whether CentOS Stream is going to produce meaningfully less churn than Fedora… I don’t doubt that the churn will be less, but will it be that much less?
The entire reason most of us moved away from CentOS for bare-metal servers was exactly this: it’s fundamentally a staging ground for the next release of RHEL, so there are lots of regular updates happening. Beyond the churn, CentOS often has package conflicts due to one package updating while its dependency has not yet been updated.
I think the best options are either almalinux-bootc or, perhaps a better approach, simply using fedora:latest-1 (fedora:41 in this case).
The previous version of Fedora continues to get bug fixes until the next full release… so in our case we would simply run uCore based on fedora:41, which would then upgrade to fedora:42 once 43 is released. This would give us the “best of both worlds”: less churn yet still getting critical bug fixes.
I think trying to base a server project around centos isn’t going to work out like you envision…
Regardless of what containerized apps you may want to run, you’ve basically described what I wanted out of uCore! CoreOS was chosen as a base because it 1) functions a lot like our Universal Blue desktop systems under the hood and 2) is a containers-first server platform. We still have the same goals with the server project replacing uCore.
That said, I never felt it was in scope for uCore (or Universal Blue) to create the “complete list” of containers you could run. But a new project was just shared, which we may incorporate, that makes it even easier to deploy and manage containerized apps: Introducing orches: a git-ops tool for podman
CentOS Stream has significantly less churn than Fedora. Just at a superficial level, Fedora publishes updates every night, whereas CentOS publishes updates once a week. The types of updates are also quite different: Fedora maintainers regularly rebase packages to new upstream versions, but CentOS must follow the RHEL Application Compatibility Guide (ACG), which restricts which types of updates are allowed. Sometimes these are version rebases, but more often than not they are your traditional backported security/bug fix patches.
To clarify, it’s a preview of the next minor version of the same major version of RHEL. CentOS Stream 9 currently has RHEL 9.7 content, and CentOS Stream 10 currently has RHEL 10.1 content. Anyone who has used RHEL for some time knows that it doesn’t change that much between minor versions, largely because of the aforementioned ACG rules. To put it another way, it’s the major version branch of RHEL that the RHEL minor versions are cut from. The key difference is that updates are published in CentOS once they pass QA, rather than being batched up (deferred) into large minor version updates. So it’s the same overall pace of updates, just delivered in a smooth arc instead of large batches.
Do you have any examples of this? It shouldn’t be happening at all for packages within the distro, and if it is then we should file bug reports and help improve the pipeline and tests to prevent it from happening. If you’re talking about packages outside the distro, that will be up to the third party providing those packages to account for. That’s why EPEL 10 now targets specific minor versions, including CentOS Stream 10 as the leading minor version.
I believe it’s a priority for the Universal Blue maintainers to work directly with the bootc developers, and those devs are working in Fedora and CentOS, not in projects further downstream like Alma. I also believe they would prefer to have the most current versions of bootc, as the project is relatively young and moving pretty quickly (currently Alma has 1.1.6, CentOS has 1.3.0).
The problem with fedora:latest-1 is that it generally gets the same kernel updates as fedora:latest, so you won’t see any reduction in churn on one of the most critical components (which is also the one most likely to see regressions).
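This is easy to check against the images themselves, for example (image name and tags here are assumptions; substitute whatever base you actually track):

```bash
# Compare the kernel shipped in consecutive Fedora bootc releases
for tag in 41 42; do
  echo -n "fedora-bootc:$tag -> "
  podman run --rm "quay.io/fedora/fedora-bootc:$tag" rpm -q kernel
done
```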
Yes, you are right that CentOS is only tracking 10.0 → 10.1 changes and all those changes are relatively minor. However, to illustrate my point, here is an unscientific but illustrative example.
I have an Alma 9 server on which I manually run “dnf upgrade” about 3-4 times per week. With a quick bash script hack, I found that between the official releases of 9.5 and 9.6, about 550 individual packages were updated on my system over that six-month period.
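For anyone who wants to reproduce that kind of count, a rough sketch (transaction IDs are placeholders, and dnf history output varies a bit between dnf4 and dnf5):

```bash
# Count distinct packages touched by upgrades across a range of dnf transactions
# (check `dnf history list` to find the range covering the dates you care about)
sudo dnf history info 100..180 \
  | awk '$1 == "Upgrade" { sub(/-[^-]+-[^-]+$/, "", $2); print $2 }' \
  | sort -u | wc -l
```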
Now, I do have epel + crb + nvidia repos enabled, so let’s be charitable and assume that my package churn is 2x what ublue’s would be. That still means ~275 package updates during that time. But, and here is my point, each one of those packages will itself have had multiple updates within Stream.
So, taking bootc as the example, during that “interim” timeframe CentOS itself will publish releases for each build from 1.1.1 to 1.1.2, and then again more regular releases on the way from 1.1.2 to 1.3.0.
Since there are no “point releases”, this “stream” of updates (literally what it says on the tin) never stops. So once 1.3.0 is released you will start getting 1.3.1 build releases… Multiply that by all the packages on your system and you will see that the churn is going to be greater than what you initially expect.
For background, I ran CentOS from 4 to 8. When the change happened during 8, I tried to run Stream 8 but quickly saw that it was unsuitable as a server OS because of the churn involved.
Yes, I guess we should’ve defined “churn” in this context.
I suppose there are 2 definitions: (a) upgrades that break config, create ABI changes, or somehow require manual involvement to resolve, or (b) upgrades that will cause a reboot in a bootc / rpm-ostree environment.
I was meaning the latter.
(note: by default most bootc images run bootc-fetch-apply-updates.service, which auto-applies and reboots on any image change)
For something like a NAS, or really any server workload, these types of reboots are unacceptable in my view.
I’ll define “churn” as I mentioned it in my initial post.
Primarily, I was referring to constantly updating kernel versions. Even when running on Fedora release-1, kernels are regularly updated. Secondarily, I’m referring to substantial package updates which are more common in Fedora than CentOS, as CentOS adheres to the RHEL ACG.
The changes landing in a CentOS update are the same as those landing in a RHEL update, but earlier and perhaps in a less batched manner. This is still massively more stable than being built on Fedora.
Arguing about a reboot required to apply updates is wildly off topic here. All rpm-ostree/bootc image-based systems in our context have this requirement. Forcing the update immediately or not is an implementation detail; uCore does not do so, nor do I expect Cayo (our CentOS server project) to.
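And even where an image does enable the automatic updater, it’s just a systemd unit the admin controls (unit name is what current bootc ships; confirm it exists on your image):

```bash
# Stop the automatic fetch-and-reboot behavior
sudo systemctl disable --now bootc-fetch-apply-updates.timer

# Later, during a maintenance window: stage the new image, then reboot when ready
sudo bootc upgrade
sudo systemctl reboot
```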