BTRFS - New to Bluefin and BTRFS suggestions?

Is there a guide for setting up BTRFS on Bluefin so I can use it to roll back changes to the user ‘data’? I see the file system is btrfs, but I’m not sure if there are any special considerations for using something like ‘snapper’ to checkpoint and roll back changes to the ‘data’ in the user area, given the modified filesystem layout for Bluefin. (Or will this work using normal btrfs guides, as long as I watch out for and account for the /var/home rather than /home layout?)

My main ‘use case’ is ComfyUI … updating and adding new custom nodes is very hit or miss as to whether it will work or hose the whole ComfyUI install.

Right now I run ComfyUI in a distrobox; more than one, if I find I need two conflicting Python/PyTorch/CUDA configurations for different workflows, but I only know I need a new Distrobox configuration AFTER I break my ‘production’ Distrobox.

What I do right now to ‘checkpoint’ the distrobox is, before any updates or additions, use ‘podman commit’ to create a new image from the container in question. I name/tag the new image so I can push it to a project on my Docker Hub account (I plan on running a local image repository eventually, but that’s a ‘next step’ thing).
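Roughly, that checkpoint step looks like this (the container and repository names here are just placeholders for my own):

podman commit comfyui-box comfyui-checkpoint
podman tag comfyui-checkpoint docker.io/<my-dockerhub-user>/comfyui-checkpoint:latest
podman push docker.io/<my-dockerhub-user>/comfyui-checkpoint:latest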

If the update breaks ComfyUI I just delete the container and recreate it with Distrobox the same way I originally created it (adding the --nvidia and --home parameters), but pointing at my saved checkpoint image in my project on Docker Hub.
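The recreate step is basically the same distrobox-create command as the original, just with the image swapped for the checkpoint (names here are placeholders):

distrobox-create --image docker.io/<my-dockerhub-user>/comfyui-checkpoint:latest --name comfyui-box --nvidia --home <path to the custom $HOME>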

This works except I need to manually remove any ‘custom_nodes’ that were added so they don’t reinstall the breaking Python or Linux packages.

It would be nice to just run a script that checkpoints the Distrobox (podman) and the ComfyUI data (btrfs) so I wouldn’t have to manually monitor and undo the host file changes in case of rollback.
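Something like this sketch is what I’m imagining (names and paths are hypothetical, and it assumes /var/home is its own btrfs subvolume):

podman commit comfyui-box comfyui-checkpoint
sudo btrfs subvolume snapshot -r /var/home /var/home-checkpoint-$(date +%Y%m%d-%H%M)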

So you need to know if /home is altered or managed by the system? Or /var/home?

Sounds like there’s probably a better way to do what you’re doing, but if you want to create and restore snapshots, you can install btrfs-assistant.

rpm-ostree install btrfs-assistant

Reboot, and you’ll find the application in Show Apps.
You can also set it up to automatically create and expire snapshots, create a snapshot on boot, etc.

Create a new config:

Create a snapshot:

List of snapshots where you can restore if wanted:

(Snapshot named based on the imagery on the ComfyUI website.)
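The same steps can also be done from the terminal with snapper (which btrfs-assistant can manage); the config name is arbitrary and this assumes the home subvolume is mounted at /var/home:

sudo snapper -c home create-config /var/home
sudo snapper -c home create --description "before ComfyUI changes"
sudo snapper -c home list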


Yeah … I’m playing with it now.

Looks like by default /var/home and /var are included as ‘subvolumes’, so I installed snapper, python3-dnf-plugin-snapper and btrfs-assistant.

In btrfs-assistant I created a config for /var/home and /var and did a manual snapshot for both subvolumes before I started making changes to a ComfyUI install. I did a little bit of the install and then did a rollback … rebooted, and I was back to where I wanted to be.
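For anyone doing the same from the terminal instead of the GUI, I believe the rough snapper equivalent is to list the snapshots for the config and then revert the file changes between a snapshot and the current state (snapshot numbers come from the list output; note that undochange reverts file contents rather than swapping the whole subvolume the way a btrfs-assistant restore does):

sudo snapper -c home list
sudo snapper -c home undochange 1..0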

A bit more testing to do, but it looks very promising.

I don’t need to be saving the images to Docker Hub either, except as a backup for any custom distrobox images I might want to make. In this case, for example, I made one with some extra Ubuntu packages installed that are usually needed for a ComfyUI installation (git, build-essential, ffmpeg, vim), so it saves me a step in setting up the environment if I need to build a new distrobox.
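Building that custom image is nothing fancy; roughly this, with the names as placeholders:

distrobox-create --image ubuntu:20.04 --name comfy-base --nvidia --home <path to the custom $HOME>
distrobox enter comfy-base -- sudo apt-get update
distrobox enter comfy-base -- sudo apt-get install -y git build-essential ffmpeg vim
podman commit comfy-base comfy-base-image
podman push comfy-base-image docker.io/<my-dockerhub-user>/comfy-base:latest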

Anyway, I think I answered my own question, so unless someone has some comments, suggestions or questions … thanks!

Thanks Jay,

I responded while you were giving me what I was looking for :wink: … that’s just about exactly what I did. Seems to be what I wanted.

Cheers


I’m not sure the dnf plugin will get called, so perhaps a manual snapshot script before your changes would be best. The automatic snapper timeline service takes snapshots every hour by default, regardless of how often you actually want them.
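If the hourly timeline is more than you want, it can be turned down or off per config; the relevant settings in /etc/snapper/configs/<config name> look something like this (the values here are just examples):

TIMELINE_CREATE="no"
TIMELINE_LIMIT_HOURLY="0"
TIMELINE_LIMIT_DAILY="7"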

I hadn’t heard of Comfy… Looks very interesting. Going to try it out soon.

Probably best if you have a decent GPU (NVIDIA works best).

I have an NVIDIA 4070 Ti Super (16 GB VRAM). Not the best, but until the 5090 series comes out it’s about the 3rd best … you’ll need a GPU with a minimum of 4 GB, but that will be slow (you can use the CPU, but that’s probably 20 minutes to create a simple image that my GPU can create in less than 1 second).

An 8 GB VRAM GPU is probably the minimum you can use without being frustrated and giving up. 12 GB of VRAM will probably allow you to run most workflows (but some will need fiddling around to prevent running out of VRAM); 16 GB will allow you to run nearly all workflows, and 24 GB (NVIDIA 3090 & 4090) will run just about all workflows.

I create my distrobox environments using a command similar to this:

distrobox-create --image ubuntu:20.04 --name <container name> --nvidia --home <path to where you want the custom $HOME to be>

If you don’t have an NVIDIA card, then you leave out the --nvidia option (I think, but I only have an NVIDIA GPU, so I can’t say for sure what to do if you have an AMD GPU).

Good luck.

1 Like

Thanks! I have a 3080 with 16 GB. Is it correct that 12 GB of VRAM just runs things slower, or will it actually fail to run more demanding workflows?

Depends on the job.

I don’t think it’s necessarily the amount of memory that determines the ‘speed’ per se.

All things being equal (same class of GPU), the main issue with memory size is just ‘will it fit?’ If it doesn’t, then you get an error and the generation stops.

It’s up to your workflow to make sure that doesn’t happen (i.e., chunk up the generation process from one large job into many smaller jobs … you usually see that in video generation, where you might have to process a batch of ten 6-frame jobs instead of just one 60-frame job).

Batching makes the process slower, but it’s not necessarily the GPU’s ‘speed’ that causes this slowness.

If I were to upgrade to an RTX 4090 I’d get 50% more memory, and maybe a doubling of performance (and price), but that speed improvement is more about the number of cores (doubled), a larger L2 cache and a 50% wider memory bus.

Where the memory size comes in is in how easily you can manage it. Sometimes, like with 4 GB GPUs, you basically can’t easily manage it and many workflows are not really possible (or would take days to process), but even the 24 GB 4090 can run into issues if the jobs are too memory intensive.

The 16 GB should give you the same abilities as my 4070 Ti Super … just a slower architecture. This means you have access to pretty much all workflows, but there will be many cases where you would want to use ‘batching’ techniques to keep the individual jobs within the 16 GB limit. This slows down the total job, but my 4070 will run those individual jobs faster (most workflows are able to work with 12 GB, and many with even 8 GB, but they need to be ‘batched’ and they will run slower, sometimes much slower).

The best thing is to play around a bit, learn how to use ComfyUI, and come up with strategies for doing what you want to get done in an efficient manner. (If you need something that your 3080 can’t handle, and it’s important, you can use a cloud GPU service, rent a 4090 (or better) for $1 or $2 an hour, move your workflow over to an already fully configured system, and do the final generation there.)


GPU memory, or the lack thereof, can be a contributing factor. A GPU with more memory means there is less need to transfer data into and out of the GPU, which speeds processing. E.g. running with a batch size of 32 may be faster (and train better) than running with a batch size of 16.

That is true, but I was talking about speed differences across different models of the same brand/architecture of GPU, and trying to relate ‘memory speed’ to ‘overall rendering speed’ vs memory size vs bus width vs GPU cores.

If I recall correctly, the actual VRAM memory specs in my 4070 Ti Super and a 4080 are the same. The 4070 has either a slightly wider bus and/or slightly more cores (I don’t recall specifically), which makes it slightly faster despite having the same speed and amount of memory.

It may well be that the 3090 has more memory, but my 4070 is faster (at least in workloads where no ‘chunking’ is required … if chunking is needed, then I’m not sure how well the 4070 would stack up, but even if it were faster for jobs with only a few ‘chunks’, the chunking overhead would eventually catch up with it and the 3090 might start pulling ahead for large jobs with lots of chunking).
