Does this mean we get better AI integration in Bazzite?* (right now there’s no ujust command to set that up, unless that has changed, so correct me if I’m wrong)
*(with the main use case for Bazzite still being gaming, but another big one being local AI?)
For LLMs, the ujust command has been removed from Bluefin as well. The documentation is being revised to show how to run them in a container instead.
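Until the revised docs land, here's a rough sketch of the container approach, assuming podman and the upstream Ollama image (the image name, volume name, and model are illustrative, not from the official docs):

```shell
# Run the Ollama server in a rootless podman container.
# The named volume keeps downloaded models across restarts.
podman run -d --name ollama \
  -p 11434:11434 \
  -v ollama:/root/.ollama \
  docker.io/ollama/ollama

# Chat with a model inside that container.
podman exec -it ollama ollama run llama3
```

For GPU acceleration you'd additionally pass the device through, e.g. `--device /dev/kfd --device /dev/dri` on ROCm.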
For PyTorch there is nothing to do, as everyone uses devpods with CUDA/ROCm support anyway (those boxes were also removed from Bluefin).
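For reference, a minimal sketch of such a throwaway PyTorch container on AMD hardware (the `rocm/pytorch` image tag is illustrative; the device flags are the usual ROCm passthrough):

```shell
# Interactive PyTorch container with the AMD GPU passed through.
# /dev/kfd is the ROCm compute interface, /dev/dri the render nodes.
podman run --rm -it \
  --device /dev/kfd --device /dev/dri \
  --group-add video \
  docker.io/rocm/pytorch:latest \
  python3 -c 'import torch; print(torch.cuda.is_available())'
```

Note that ROCm builds of PyTorch answer through the `torch.cuda` API (HIP is mapped onto it), so the same scripts work on both vendors.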
The Framework Desktop is nice from a hardware perspective: if you buy it with 128 GB of RAM, you can allocate 96 GB to the GPU and leave 32 GB for the OS. Then you can actually load the large (80 GB) LLMs into GPU memory and interact with them locally.
The nice thing is that they will sort of aim to have it running properly on Linux. In my personal experience, I had a lot of stability pain with a 4060 dGPU on my Yoga Pro 9i (which has fortunately settled a lot lately, so I should mention that the Nvidia drivers are getting better). But on my desktop, where I have a 6700 XT and a 6800, both worked perfectly out of the box with PyTorch. So I am sure the AMD drivers will work on day 1 (they even mentioned that in the live stream).
It’s more of a QA thing from a software perspective: if you buy it and install Bazzite, running an LLM or spinning up a ROCm/PyTorch container on it “will just work”. This was my main point of contention with the Nvidia 4060 in my Yoga Pro 9i: it only became somewhat stable 6-7 months after purchase…
The reason is that it is super-fast VRAM shared between the CPU and GPU, so it’s a trade-off. There’s no way they could reach that level of memory bandwidth with regular RAM that is not soldered…
The RAM is tightly integrated with the CPU to achieve the best possible bandwidth. I can see why they didn’t manage to make it modular.
I’m intrigued by this device. I’m just not sure what I would use it for. I love the use-case for LAN-parties, but this doesn’t fit my current lifestyle. sigh
The video says that there is some collaboration going on between Framework and different Linux distros. So yes, we will probably get better AI support in Fedora and Bazzite, because Framework wants this computer to be a really good AI workstation.
We’ve added ramalama to the -dx editions across the board; it makes managing local AI models feel the same as using podman, same UX, etc. This will also pull down the associated hardware support for Vulkan acceleration and so on.
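For anyone who hasn't tried it, the workflow is deliberately podman-shaped; a quick sketch (the model name is just an example, ramalama resolves it from registries like Ollama's via the `ollama://` prefix):

```shell
# Pull a model, stored locally much like a container image.
ramalama pull ollama://smollm:135m

# Chat with it; ramalama detects available acceleration
# (Vulkan/ROCm/CUDA) and pulls a matching runtime container.
ramalama run ollama://smollm:135m

# Or serve it over HTTP for other apps to talk to.
ramalama serve ollama://smollm:135m
```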
It’s still early days but things are looking really good.