Using uCore with a git-based deployment workflow

Hello, I’m curious whether uCore is a suitable base for IaC in a homelab, for something like a container host. I know I can bake my own images and deploy them with an Ignition config. But what about changing the running setup, e.g. to add or remove a container, using some sort of git workflow?

Here’s how I understand it so far:

  • Ignition runs only on first boot, so adding/removing containers would mean a full redeploy.
  • I can produce a new base image and rebase a deployed machine onto it (see the rebase sketch just after this list), but that’s more of a system-update-level thing: adding new packages and so on.
  • I can modify the running configuration directly on the host, but that wouldn’t be reproducible unless I use Ansible or something similar.
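
For context, the rebase itself is a single rpm-ostree call. A minimal sketch, assuming a hypothetical custom image published at ghcr.io/example/my-ucore:

# Point the deployed machine at the custom OCI image;
# the new deployment takes effect on the next reboot.
sudo rpm-ostree rebase ostree-unverified-registry:ghcr.io/example/my-ucore:stable
sudo systemctl reboot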

In the wild, so far I’ve only seen this project, which apparently follows the full-redeploy path. Personally, I’m more inclined to include most of the configuration, including containerfiles, in my custom OCI image and leave only user setup in Ignition (because that’s a pain to bake into the base image).

Are there any other options, or did I misunderstand something?

The base image should not contain configuration/containerfiles, only the software. Configuration (containerfiles) should be treated as data and managed separately.

I want to do something similar, but I’m going to build my own uCore image and install k3s, a really small Kubernetes distribution, to replace my current Ubuntu server; I’ve seen blog articles about this setup.
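
For reference, on an already-running host the standard single-node k3s install is a one-liner using the project’s install script; baking it into a custom image instead takes more work:

# Install k3s as a systemd service on the current host
curl -sfL https://get.k3s.io | sh -
# Confirm the node registered, using the bundled kubectl
sudo k3s kubectl get nodes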

Of course, Kubernetes may be overkill for your needs. As I work with Kubernetes daily, it was the obvious choice for me. A typical git-based deployment workflow applies GitOps principles with Kubernetes and a GitOps tool like Argo CD or Flux; Argo CD seems to be the most popular one.

The idea is to manage the deployments in git: when you change the containerfiles in git, the running containers on the uCore host are updated. That means you need a GitOps tool that checks the git repo regularly and synchronizes the server with the git source. You configure the GitOps tool manually once, and then it’s all automated through git.
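
With Argo CD, for example, that one-time setup boils down to registering an Application pointing at your repo. A sketch using the argocd CLI, where the app name, repo URL, and path are placeholders:

# Hypothetical app: from here on, Argo CD keeps the cluster synced with the repo
argocd app create homelab-apps \
  --repo https://github.com/example/homelab.git \
  --path apps \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy automated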

For something simpler, a quick search on gitops and podman turned up this Red Hat blog: FetchIt: Life-cycling and configuration of containers using GitOps and Podman.

The fetchit project seems alive and well. It is most probably a lot simpler than the whole complex Kubernetes+GitOps ecosystem. I’m almost tempted :grin: but I like doing things the hard way. :rofl:
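
For reference, the quickstart boils down to running FetchIt itself as a podman container with access to the podman socket and a git-tracking config; the sketch below is from my memory of the blog post, so treat the image path and mount points as assumptions and check the project README:

# FetchIt watches the repo(s) listed in the mounted config and
# life-cycles podman containers to match (paths/image are assumptions)
podman run -d --name fetchit \
  -v fetchit-volume:/opt \
  -v $HOME/.fetchit:/opt/mount \
  -v /run/podman/podman.sock:/run/podman/podman.sock \
  quay.io/fetchit/fetchit:latest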


Thank you, this is a very helpful answer. I will check out FetchIt; it seems like exactly what I need.

I do in fact plan to eventually reach Kubernetes and GitOps as my runtime, but that will take a while with my limited tinkering time, so I’m attacking one problem at a time.


Yes, I know what you mean. :grin:


I ran k3s on a Raspberry Pi 4B+ cluster (3 nodes) for a few years while I was learning k8s. I was able to use helm to install OpenFaaS. I ported a couple of my apps at the time to use it.

I found both of those projects at cncf.io. There are enough projects there for a lifetime of exploration. OpenFaaS does not seem to be on the dashboard any longer, but before I retired I used to rely on that org to learn what was hot or up-and-coming.


Thanks for posting about fetchit, I hadn’t seen that project before. However, I think there is a simpler “batteries-already-included” approach: Quadlet files (specifically, quadlets with the AutoUpdate flag set in the .container file).

If you don’t know them, Quadlets are a sort of blend of systemd and podman: you place a .container file (which looks a lot like a systemd .service file) in /etc/containers/systemd or ~/.config/containers/systemd, and from there it is managed by systemctl.

As an example, here is my /etc/containers/systemd/zerotier.container:

[Unit]
Description=Zerotier-one Service
After=network.target

[Container]
Image=docker.io/zyclonite/zerotier:latest
AutoUpdate=registry
Network=host
AddCapability=NET_ADMIN
AddCapability=SYS_ADMIN
AddDevice=/dev/net/tun
Volume=/var/lib/zerotier-one:/var/lib/zerotier-one:z

[Install]
WantedBy=multi-user.target

This results in a zerotier.service unit that can be managed via systemctl (i.e. systemctl start zerotier.service, etc.). Setting AutoUpdate=registry means that a call to podman auto-update will pull the newer image and restart the container; AutoUpdate=local does the same against locally built images.
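
Concretely, the day-to-day commands look like this (unit name from the example above; the timer ships with podman):

# Regenerate units after adding or editing a quadlet, then start it
sudo systemctl daemon-reload
sudo systemctl start zerotier.service

# Preview, then apply, updates for containers with AutoUpdate set
sudo podman auto-update --dry-run
sudo podman auto-update

# Or run auto-update on a schedule via the stock timer
sudo systemctl enable --now podman-auto-update.timer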

There is an alternative approach: set Pull=newer in the .container file, and then on every start of the container (i.e. each boot) it will automatically pull a newer image. Personally, for various reasons, I prefer to manage updates via podman auto-update.

I think using quadlets plus some sort of gitops orchestration can achieve what you want. I personally just use ansible to push my container files to my nas (running ucore). But I’ve been in the process of gitops-enabling this, where any change to a .container file in git launches a workflow script that calls ansible to push the changes to the nas; see the sketch below.
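
A minimal ad-hoc version of that push, assuming a hypothetical inventory host named nas and a local quadlets/ directory:

# Copy the quadlet over, then reload systemd and restart the unit
ansible nas -b -m ansible.builtin.copy \
  -a "src=quadlets/zerotier.container dest=/etc/containers/systemd/"
ansible nas -b -m ansible.builtin.systemd \
  -a "daemon_reload=true name=zerotier.service state=restarted"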

My goal is to make the boot drive of my nas immaterial, meaning that if the boot drive blows up I can replace it and redeploy the entire nas via an ansible script. As long as all your data is on a proper raid array, the boot drive itself could even be an attached usb drive. There is no need to worry about ‘protecting’ the boot drive, since it’s designed to be replaced.

Thanks for the answer. The problem with quadlets is that the documentation points to deploying them at initial image deployment, and I add and remove them often enough that a full redeploy each time isn’t appealing. They live in mutable directories, so changing them without a redeploy is doable, but it would still be a bit clunky to implement in GitOps.
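
To be fair, the runtime change itself is only a file drop plus a reload; it’s the git plumbing around steps like these that’s missing (unit name is hypothetical):

sudo cp myapp.container /etc/containers/systemd/
sudo systemctl daemon-reload        # quadlet generates myapp.service
sudo systemctl start myapp.service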

A dedicated GitOps solution like FetchIt sounds much nicer.


fetchit does look like a nice, purpose-built solution. However, I’m curious: why not just use some type of “git actions / workflows”?

I’m asking because I’m unclear on what exactly fetchit does that couldn’t already be done from a git workflow.