Building a bootable qcow2 virtual disk image from an OCI OSTree image

Hi folks :wave:

I’m with ii.nz :new_zealand: and am trying to build bootable USB drives for things like classroom education and shared team development environments.

Recently I came across osbuild/osbuild-deploy-container on GitHub, which can build a qcow2 image from an OCI OSTree image (e.g. a Universal Blue image).

I built a qcow2 image from my u image with

mkdir -p ./output
cat << EOF > ./output/config.json
{
  "blueprint": {
    "customizations": {
      "user": [
        {
          "name": "ii",
          "password": "ii",
          "groups": ["wheel"]
        }
      ]
    }
  }
}
EOF

podman run \
  --rm -it --privileged \
  -v $(pwd)/output:/output \
  ghcr.io/osbuild/osbuild-deploy-container:latest \
    -imageref ghcr.io/bobymcbobs/u:latest \
    -config /output/config.json

The image generated at ./output/qcow2/disk.qcow2 is bootable in your favourite QEMU-based virtualisation software (like libvirt + virt-manager).
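For a quick local smoke test before writing to any physical media, something like this should boot the image directly with QEMU (a minimal sketch; it assumes qemu-system-x86_64 and KVM are available, and the OVMF firmware path for a UEFI test varies by distro):

# boots with the default SeaBIOS firmware; for a UEFI test, add OVMF, e.g.
#   -drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/ovmf/OVMF_CODE.fd
qemu-system-x86_64 \
  -m 4096 -enable-kvm \
  -drive file=./output/qcow2/disk.qcow2,format=qcow2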


Currently I’m working out how to go from a VM disk image to physical boot media.

I’m aware of qemu-img dd and currently think it’s the right tool to continue down this path:

qemu-img dd -f qcow2 -O raw bs=4M if=disk.qcow2 of=disk.raw

After this, I write the raw image to a local SSD with Etcher, though it’s not currently booting and is instead stuck at a blank screen.
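To rule out the flashing tool, writing the raw image with plain dd should be equivalent (a sketch; /dev/sdX is a placeholder for the target disk, so double-check it before running, as this is destructive):

# write the converted raw image straight to the target disk
sudo dd if=disk.raw of=/dev/sdX bs=4M status=progress oflag=direct conv=fsync
sync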

Let me know if you have any ideas!
Cheers!

2 Likes

I made some forks of the tooling to try to build a raw image instead of a VM image

and using

mkdir -p ./output
cat << EOF > ./output/config.json
{
  "blueprint": {
    "customizations": {
      "user": [
        {
          "name": "ii",
          "password": "ii",
          "groups": ["wheel"]
        }
      ]
    }
  }
}
EOF

podman run \
  --rm -it --privileged \
  -v $(pwd)/output:/output \
  ghcr.io/bobymcbobs/osbuild-osbuild-deploy-container:latest \
    -imageref ghcr.io/bobymcbobs/u:latest \
    -config /output/config.json

a raw image is produced.

After writing the disk.raw file to a local SSD and trying to boot it on my lower-tier test machine (an ASUS E410M), I get the same blank screen upon selecting the boot option.

Out of curiosity, I tried to boot the disk.raw image in a VM and it boots just fine.

Currently I’m unsure whether the issue stems from one of the following (a quick check for the first is sketched below):

  • the format of the image produced
  • the local machine
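One way to narrow down the first possibility might be to inspect the partition table of the produced image, e.g. to confirm an EFI System Partition is present (a sketch; the disk.raw path is an assumption):

# fdisk can read a partition table straight from an image file
fdisk -l disk.raw
# or attach it to a loop device to also inspect the filesystems
sudo losetup --find --show --partscan disk.raw
lsblk -f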

Cheers!

So I’m running into a weird error.

efibootmgr-18-4.eln131.x86_64
dosfstools-4.2-8.eln131.x86_64
Creating group 'systemd-coredump' with GID 998.
Creating user 'systemd-coredump' (systemd Core Dumper) with UID 998 and GID 998.
Creating group 'systemd-timesync' with GID 997.
Creating user 'systemd-timesync' (systemd Time Synchronization) with UID 997 and GID 997.
deleting the fake machine id

⏱  Duration: 22s
org.osbuild.selinux: 008587f22809ff3de17cdffe7cf859e54bcb6f539efef3dc87d5611b8d9c6800 {
  "file_contexts": "etc/selinux/targeted/contexts/files/file_contexts",
  "labels": {
    "/usr/bin/cp": "system_u:object_r:install_exec_t:s0"
  }
}
/usr/lib/tmpfiles.d/journal-nocow.conf:26: Failed to resolve specifier: uninitialized /etc/ detected, skipping.
All rules containing unresolvable specifiers will be skipped.
setfiles: Could not set context for /run/osbuild/tree/usr/lib/systemd/system-generators/systemd-fstab-generator:  Invalid argument
setfiles: Could not set context for /run/osbuild/tree/usr/lib/systemd/system-generators/systemd-rc-local-generator:  Invalid argument
setfiles: Could not set context for /run/osbuild/tree/usr/lib/systemd/system-generators/systemd-sysv-generator:  Invalid argument
Traceback (most recent call last):
  File "/run/osbuild/bin/org.osbuild.selinux", line 75, in <module>
    r = main(args["tree"], args["options"])
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/run/osbuild/bin/org.osbuild.selinux", line 62, in main
    subprocess.run(["setfiles", "-F", "-r", f"{tree}", f"{file_contexts}", f"{tree}"], check=True)
  File "/usr/lib64/python3.12/subprocess.py", line 571, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['setfiles', '-F', '-r', '/run/osbuild/tree', '/run/osbuild/tree/etc/selinux/targeted/contexts/files/file_contexts', '/run/osbuild/tree']' returned non-zero exit status 255.

⏱  Duration: 4s

Failed
running osbuild failed: exit status 1

Any insights? I’m pretty much copy-pasta’ing.

1 Like

I’m noticing different behaviours on different machines, even though it’s in a container.

I’m using machines on Equinix Metal, a RHEL 8 one and an Ubuntu one, both with Podman.
The build works when running on the RHEL 8 machine but not the Ubuntu one.
The Ubuntu machine gets stuck on

org.osbuild.copy: 9be0f0970bcb5f7007290af06ab0f981d6c8aa86a0ba0c786b7e89b442dd3bcd {
  "paths": [
    {
      "from": "input://root-tree/",
      "to": "mount://-/"
    }
  ]
}
device/- (org.osbuild.loopback): loop6 acquired (locked: False)
device/boot (org.osbuild.loopback): loop7 acquired (locked: False)
device/boot-efi (org.osbuild.loopback): Exception ignored in: <function Loop.__del__ at 0x7fd85156a3e0>
device/boot-efi (org.osbuild.loopback): Traceback (most recent call last):
device/boot-efi (org.osbuild.loopback):   File "/usr/lib/python3.12/site-packages/osbuild/loop.py", line 137, in __del__
device/boot-efi (org.osbuild.loopback):     self.close()
device/boot-efi (org.osbuild.loopback):   File "/usr/lib/python3.12/site-packages/osbuild/loop.py", line 144, in close
device/boot-efi (org.osbuild.loopback):     fd, self.fd = self.fd, -1
device/boot-efi (org.osbuild.loopback):                   ^^^^^^^
device/boot-efi (org.osbuild.loopback): AttributeError: 'Loop' object has no attribute 'fd'
Traceback (most recent call last):
  File "/usr/bin/osbuild", line 33, in <module>
    sys.exit(load_entry_point('osbuild==98', 'console_scripts', 'osbuild')())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/site-packages/osbuild/main_cli.py", line 169, in osbuild_cli
    r = manifest.build(
        ^^^^^^^^^^^^^^^
...

Potentially a host kernel limitation with one OS having SELinux and one not?


@jeefy, what’s your host machine OS?

@jeefy, what’s SELinux configured like on your machine?

grep '^SELINUX=.*' /etc/selinux/config

The RHEL 8 server I’m using has it set to permissive.
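In case that turns out to be the difference, a sketch of switching a host to permissive looks like:

# switch SELinux to permissive for the current boot
sudo setenforce 0
# persist the change across reboots
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config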

1 Like

:100: was SELinux, setting to Permissive has gotten me past that error :slight_smile:

1 Like

Trying from a different angle, I managed to install successfully to an SSD using bootc:

sudo podman run -it --rm \
  --privileged --pid=host \
  --security-opt label=type:unconfined_t \
  ghcr.io/bobymcbobs/u:latest \
    bootc install \
    --wipe --target-no-signature-verification \
    /dev/sda

Heya! I’m one of the maintainers of osbuild-deploy-container. This project is very young and was basically hacked up in a day by me and Achilleas, so there are indeed some rough edges. However, I’ve spent the last 2 weeks figuring out the overall vision, and I hope that we will be able to ramp up the developer effort on this tool next week.

Some highlights:

  • The latest version works on SELinux-enforcing systems. Sorry about that, that was an oversight!
  • The loop issue is fixed in osbuild upstream, we’re working on getting the fix into osbuild-deploy-container asap. Fun fact: If you run the build enough times, it will succeed at some point.
  • I want to rework the CLI soon-ish. I apologize for any breakages in the near future. I hope we can stabilize it asap.
  • One item high on our roadmap is support for creating offline ISOs. This is our preferred way to install bare-metal machines.
  • About supporting raw: I’m not really sure; I would much rather have people use a headless installer instead, because it can cover some corner cases that raw cannot. However, you can always use qemu-img to convert qcow2 to raw outside the container (or just abuse the fact that the container has qemu-img inside; see the sketch below): qemu-img convert -O raw your.qcow2 your.raw
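For example, running the conversion via the container might look roughly like this (an untested sketch; overriding the entrypoint and the /output paths are assumptions):

podman run --rm \
  -v $(pwd)/output:/output \
  --entrypoint qemu-img \
  ghcr.io/osbuild/osbuild-deploy-container:latest \
    convert -O raw /output/qcow2/disk.qcow2 /output/disk.raw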

If you have any suggestions/feedback, I’m all ears!

2 Likes

Hey @ondrejbudai!
Thank you kindly for your message!

It’s a nice implementation of osbuild. Thank you for your work and continued maintenance.
Offline ISOs sound like a helpful idea too!

The needs of the project I’m working on are that the OS is a preinstalled desktop, with a user account and ideally Flatpaks too, without the need for a first-boot installer; all on a bootable USB device.
I had trouble booting the raw image produced by my fork on my hardware; I’ll try the qemu-img convert command.
The qcow2 images, even with my wrangling, didn’t boot on anything that isn’t a VM either; that might be expected though.

I was also wondering, what dependency does osbuild-deploy-container have on the host system?

When trying to boot from a disk flashed with the raw image converted from the qcow2 image, there’s just a blank screen and it doesn’t even get to GRUB.

I was also wondering, what dependency does osbuild-deploy-container have on the host system?

You need to be able to run podman rootful (it should work with docker, but I’ve never tested it). Otherwise, everything is in the container, so the only dependency should be the kernel. There are certainly many corner cases here. For example, imagine that you want to build your image with btrfs (not supported yet), but the host kernel doesn’t support btrfs (all RHEL distributions and friends). The build will fail, because the tool cannot mount the newly created partition.
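As an aside, a quick way to check whether the running host kernel knows about a filesystem like btrfs is something along these lines (a sketch):

# succeeds if btrfs is already registered, otherwise tries to load the module
grep -w btrfs /proc/filesystems || sudo modprobe btrfs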

However, I strongly believe that these corner cases will be quite rare. :crossed_fingers: Also, you can “always” run the container in a VM (we might provide some integration for this in the future, we know there’s interest, but no promises).

When trying to boot from a disk flashed with the raw image converted from the qcow2 image, there’s just a blank screen and it doesn’t even get to GRUB.

Which container image are you converting? Which tool are you using? Does the target machine support BIOS/UEFI/both?

OK yeah, true. The kernel dependency makes sense. I’m curious to investigate further on the other systems I tried.

When osbuild is installing packages, that would happen inside the container environment, correct?

Which container image are you converting? Which tool are you using? Does the target machine support BIOS/UEFI/both?

The system is booted via UEFI and appears to support both.

In the process of figuring out the practical differences between the output of osbuild-deploy-container and bootc, beginning with bootc, I ran bootc wrapped in strace to capture all syscalls, execs, and opens, and discovered a few commands it calls and files it writes to.
The command I used is

sudo podman run -it --rm --privileged \
  --pid=host --security-opt label=type:unconfined_t \
  -v "$PWD:$PWD" ghcr.io/bobymcbobs/u:latest \
    strace -ff -o "$PWD/bootc" \
      bootc install --wipe \
        --target-no-signature-verification /dev/sda

given that strace is installed in the image.

The disk partition layout is

  • 1 MB BIOS boot partition
  • 500 MB FAT named EFI-SYSTEM
  • 500 MB ext4 named boot
  • remainder: Btrfs named root
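For reference, recreating that layout might look roughly like the following (an untested sketch; /dev/sda, the GPT type codes, and the filesystem labels are assumptions based on the layout above):

sgdisk --zap-all /dev/sda
sgdisk -n 1:0:+1M   -t 1:ef02 -c 1:BIOS-BOOT  /dev/sda
sgdisk -n 2:0:+500M -t 2:ef00 -c 2:EFI-SYSTEM /dev/sda
sgdisk -n 3:0:+500M -t 3:8300 -c 3:boot       /dev/sda
sgdisk -n 4:0:0     -t 4:8300 -c 4:root       /dev/sda
mkfs.vfat -F 32 -n EFI-SYSTEM /dev/sda2
mkfs.ext4 -L boot /dev/sda3
mkfs.btrfs -f -L root /dev/sda4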

Here is a set of working commands to provision an install, minus the partitioning:

  1. mount /dev/sda4 /mnt
  2. mkdir -p /mnt/boot
  3. mount /dev/sda3 /mnt/boot
  4. mkdir -p /mnt/boot/efi
  5. mount /dev/sda2 /mnt/boot/efi
  6. ostree admin init-fs --modern /mnt --sysroot=/mnt
  7. ostree admin os-init fedora --sysroot=/mnt
  8. ostree container image deploy --imgref=ostree-unverified-image:docker://ghcr.io/bobymcbobs/u:latest --stateroot=fedora --target-imgref=ostree-remote-registry::ghcr.io/bobymcbobs/u --karg=rw --karg=console=tty0 --karg=console=ttyS0 --karg=root=LABEL=root --sysroot=/mnt
  9. ostree config set sysroot.bootloader none --repo=/mnt/ostree/repo
  10. ostree config set sysroot.readonly true --repo=/mnt/ostree/repo
  11. bootupctl backend install --device /dev/sda --src-root / /mnt
  12. grub2-install --target i386-pc --boot-directory /mnt/boot --modules "part_gpt" /dev/sda
  13. podman unshare
  14. IMG_MNT=$(podman image mount ghcr.io/bobymcbobs/u:latest)
  15. cp $IMG_MNT/usr/lib/bootupd/updates/EFI/fedora/shimx64.efi /mnt/boot/efi/EFI/BOOT/
  16. cat $IMG_MNT/usr/lib/bootupd/grub2-static/grub-static-{pre,post}.cfg > /mnt/boot/grub2/grub.cfg
  17. cp $IMG_MNT/usr/lib/bootupd/grub2-static/grub-static-efi.cfg /mnt/boot/efi/EFI/fedora/grub.cfg
  18. podman image unmount --all
  19. exit the unshare
  20. echo "set BOOT_UUID=\"$(blkid /dev/sda3 -po udev | grep ID_FS_UUID= | head -n 1 | cut -d= -f2 | tr -d '\n')\"" | tee /mnt/boot/boot/grub2/bootuuid.cfg
  21. efibootmgr --create --disk /dev/sda --part 2 --loader \\EFI\\fedora\\shimx64.efi --label FEDORA

It’s missing an /etc/fstab, amongst potentially other things; I’m still sorting through it.
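If it helps, a minimal /etc/fstab matching the labels above might look something like this (a guess, untested; the mount options are assumptions):

LABEL=root        /          btrfs  defaults                    0 0
LABEL=boot        /boot      ext4   defaults                    1 2
LABEL=EFI-SYSTEM  /boot/efi  vfat   umask=0077,shortname=winnt  0 2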

The disk /dev/sda can be replaced with a loop device to install to a disk image.
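For example, setting up such a loop device over a sparse image file could look like this (a sketch; the size and filename are arbitrary):

# create a sparse file and expose it as /dev/loopN with partition scanning enabled
truncate -s 20G disk.raw
sudo losetup --find --show --partscan disk.raw
# the printed /dev/loopN (and its /dev/loopNpM partitions) can then replace /dev/sda above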

I’m wondering whether, if I deploy with osbuild-deploy-container and then run steps 11-21 (the bootloader steps), it will boot. I’ll try that next.

I managed to get a disk to boot: it was flashed with a qcow2 image built by osbuild-deploy-container and converted to raw, then steps 11-21 from the comment above were run against it, effectively re-installing the bootloader.

1 Like