Just published QC-Labs/orange-lab on GitHub (private infrastructure for cloud natives) and thought some of you might find this useful.
Private infra based on K3s, Tailscale and Pulumi, with Longhorn for replicated storage.
It lets you add Linux machines to a lightweight cluster and run applications so they're available to anyone on your Tailscale network. Kind of like running docker-compose, but distributed across the laptops of anyone you've convinced to switch to Bluefin.
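To give a feel for what "adding a machine" means: roughly speaking it's a standard k3s agent joining the cluster over your tailnet. This isn't OrangeLab's own tooling, just the bare k3s equivalent, with the server address and token as placeholders:

# on the new machine, join the tailnet first
sudo tailscale up

# then join the cluster as an agent, pointing at the server's Tailscale IP
curl -sfL https://get.k3s.io | K3S_URL=https://<server-tailscale-ip>:6443 K3S_TOKEN=<node-token> sh -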
For now it's mostly for setting up AI components, but it's easy to add anything that can be deployed to Kubernetes (either Helm charts or plain Docker images).
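Since it's a regular Kubernetes (k3s) cluster underneath, the usual tooling works against it directly too. A hand-rolled example, nothing OrangeLab-specific (chart and image picked arbitrarily; the project itself wires apps up through Pulumi as shown below):

# any public Helm chart
helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana

# or just a container image
kubectl create deployment whoami --image=traefik/whoami
kubectl expose deployment whoami --port=80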
The cluster works even with a single laptop as the only node, but it's a great way to use spare CPU cycles and storage on any other Linux machines you have available. No need to rent compute or storage from anyone.
As an example, once the initial setup is done and you've labelled a node as having a GPU (the labeling itself is sketched further down), you can run Ollama, Open-WebUI and Stable Diffusion image generation with:
pulumi config set nvidia-gpu-operator:enabled true
pulumi config set ollama:enabled true
pulumi config set open-webui:enabled true
pulumi config set automatic1111:enabled true
pulumi up
This will create endpoints that anyone on your Tailnet can access, all wired up and ready to go.
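For completeness, the GPU labeling mentioned above is ordinary Kubernetes node labeling. The exact label key is whatever the repo documents, so treat the key and node name below as placeholders:

kubectl label nodes <gpu-node-name> <gpu-label-key>=true
kubectl get nodes --show-labels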
More info in the GitHub repo.