Running a local container registry

I have a fairly basic question and I hope that people with more container experience can share their perspective:

Does it make sense to run your own container registry on localhost? That is, run a container like docker.io/registry and then push images to it?
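To be concrete, what I have in mind is roughly this (port and image names are just the usual defaults/examples):

podman run -d -p 5000:5000 --name registry docker.io/library/registry:2
podman tag localhost/myimage:latest localhost:5000/myimage:latest
podman push --tls-verify=false localhost:5000/myimage:latest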

I really don’t care about keeping older images around and I’m the only user on my system. Right now I simply use podman’s image cache for my local images.

Yep! Follow these instructions!


Thanks, but what are some of the advantages compared to only running images from the podman cache?

It’s more useful if you have multiple machines, probably not so much with a single computer/image.


@stego, if you go down the route of a local docker registry you might also be interested in GitHub - ublue-os/forge: On-prem Universal Blue. It includes the docker registry and configures it, including a self-signed certificate. You can use the registry only, or, if you don’t want to run tons of CLI commands to build and upload images, it also has a UI and a just command runner.


To be honest, I did not understand what @j0rge was doing due to the number of technologies involved, so I began to refactor the polyrepo, and in the process it seems like I learn something new every day. I highly suggest you do this correctly, but using forge while maybe building forge images is going to be a mess. First, you’re going to need your own home lab, so bite the bullet. I’ll give you the simple answer if you have a laptop/desktop lying around:

https://github.com/alexellis/arkade

arkade install ingress-nginx
arkade install cert-manager
arkade install docker-registry
arkade install docker-registry-ingress \
  --email web@example.com \
  --domain reg.example.com
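Assuming the chart’s defaults, logging in and pushing afterwards should look something like this (arkade prints the generated registry credentials at install time, and arkade info docker-registry should reprint them; the names here are examples):

docker login reg.example.com
docker tag myimage:latest reg.example.com/myimage:latest
docker push reg.example.com/myimage:latest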

Or, a bit heavier, Incus can act as an image server:

https://linuxcontainers.org/incus/docs/main/reference/image_servers/

But the problem with all that (as Incus notes) is that you need authentication. I went down this route, and even taking reasonable shortcuts, if you want your own image server you probably need a proper home lab. Go to /r/homelab and just kind of post your budget and your needs. I needed GPU passthrough, and if you’re accessing this from the outside there’s no “kind of being secure”, though I trust that LXC containers with Incus and K8s will work reasonably. I separate them out onto separate NICs and then do a passthrough with Hetzner, simply to get access in a country with strong privacy rules.

It’s a good learning experience, and if you’re smart you realize you can provision your daily builds as a system container/VM and just sync to your local laptop, so you get Bluefin. But don’t skip security. I don’t know what you’re doing, but I ended up with a setup with GPUs, as much RAM as I could get, and a NAS on ZFS that I pumped full of RAM. I can theoretically cluster it out at this point, within reason, and also test out multi-tenancy on my K8s cluster (the boring parts, like billing, are the hardest).
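To make the authentication point concrete, the Incus side of it looks roughly like this (hostnames and names are made up; check the docs linked above before copying):

# on the machine serving the images
incus config set core.https_address :8443
incus config trust add my-laptop        # prints a one-time trust token

# on the client
incus remote add buildbox https://buildbox.example.com:8443   # paste the token when prompted
incus image list buildbox: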

But it sounds harder than it is. The only reason for the privacy and encryption was that I had an AI app for my friend’s kid that had guardrails in place, but I had to train speech-to-text on a toddler, and also he’d say things like “I want a story about whatever that Australian dog show is, but he meets, like, another copyrighted character and they become best friends.” The first two apps just happened to fit my use case, except they were autogenerated, so something like InvokeAI should be a separate service layer between the UI and the model, same as ollama-ui, because that lets you plug in image generation and speech-to-text models. Once I figured out that you apparently just take the container file and it gets converted automatically to a Quadlet, it made sense. Same thing with text-to-speech: you can prototype against the fancy UIs, plug the APIs into one layer if done correctly, and basically make sure you don’t get something inappropriate before you go and actually build the app.
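For anyone who hasn’t seen Quadlets: you drop a .container file under ~/.config/containers/systemd/ and Podman generates a systemd unit from it. A minimal sketch for something like the registry discussed earlier (names and paths are just examples):

# ~/.config/containers/systemd/registry.container
[Unit]
Description=Local container registry

[Container]
Image=docker.io/library/registry:2
PublishPort=5000:5000
Volume=registry-data:/var/lib/registry

[Service]
Restart=always

[Install]
WantedBy=default.target

After a systemctl --user daemon-reload, systemctl --user start registry.service brings it up.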

Previously I did this: https://huggingface.co/transformers/v2.2.0/installation.html, if not even lower level. But making a one-off app for your friend’s kid lets you prototype, then take the models, without needing to fire up notebooks right off the bat.

The dad just so happens to be a senior partner at a law firm leading their AI practice, and he informed me that there are so many nonsensical laws regarding minors, copyright and the rest that it is best to keep it private and encrypted. Not really Bluefin related, but that’s how I ended up going from a private repo, to a home lab, to “please don’t go to jail”. Unfortunately, it is the perfect platform for something like AI, where you can play with pretty UIs to see what’s feasible, then dip down and create an app.

Too bad so many states have laws about this, or are about to vote them in. Again, not directly Bluefin related, but that’s how I got to the private-repo spot. Too bad, because I was just going to normalize AI builds to cross-talk, but even that is possibly not legal. FWIW, the kid (and the parents who want him to go to sleep) can just be handed his iPad with some simple prompts (icons). Children also feel more open to asking questions of their iPad than of their parents.

I was literally planning on showing off how you can use Bluefin to go from concept to a real app, with no intention of making it a commercial product. But unfortunately I’d stay away from AI, even though that’s a really good use case.

I’m probably stirring up a can of worms here, but why exactly should using the “forge” be a mess? I don’t get that. And just to make things clear, the “forge” does not need a home lab. I have it running on my very old laptop which I use for my daily work.

I may have misjudged the usage. I have the repos pulled down locally, with webhooks to a local git instance. So I’m building forge from source, then populating a separate cluster after the builds complete locally with GitHub runners and custom images. I use local GitHub runners and a K8s OCI image repo to keep the images, so forge is a mere convenience. I’m not used to building native apps/containers in K8s rather than highly scalable apps. I use it basically as a task runner to build multiple projects, then deploy them to a “build cluster” that versions the apps.

Using vCluster I can basically experiment on the base repos: create an ephemeral branch based on a pull request, set up DNS automatically via Tekton, and use Flux CD because I’m used to that. Otherwise it’s a giant self-hosted repo that builds native apps. I haven’t got the workflow down, as I don’t need HA, and I want to use LXC/LXD instead of VMs and have it max out during builds using caching with Bazel. If anyone knows how to get that working inside LXC/LXD with Podman, I’d gladly share it out. I can’t figure it out, so if someone can, let me know. Again, I’m used to large-scale web apps, not building basically native apps.
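For anyone who wants to poke at the Podman-inside-LXC part: as far as I know the first hurdle is that nested containers in Incus/LXD need security.nesting enabled (the instance name is made up, and this alone doesn’t solve the Bazel caching side):

incus launch images:ubuntu/24.04 builder -c security.nesting=true
incus exec builder -- apt-get update
incus exec builder -- apt-get install -y podman
incus exec builder -- podman run --rm docker.io/library/alpine echo hello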