Machine Learning

Ollama lets you run open-source large language models, such as Llama 3, locally. It bundles model weights, configuration, and data into a single package defined by a Modelfile, and it optimizes setup and configuration details, including GPU usage.
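As an illustration, a minimal Modelfile might look like the following sketch; the base model name and the parameter value are arbitrary examples, not a file shipped by Bluefin:

    # Modelfile (illustrative sketch)
    FROM llama3                    # base model to build on
    PARAMETER temperature 0.7      # example sampling parameter
    SYSTEM You are a concise assistant.

Such a file can be built into a local model with ollama create my-assistant -f Modelfile and then run with ollama run my-assistant.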

Bluefin-dx supports several ways of installing Ollama, for example via the following ujust commands:

  • ujust ollama installs the CLI version as a container.
  • ujust ollama-web installs Open Web UI & Ollama as a container. During installation, you can choose between a GPU-enabled and a CPU-only version.
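A typical first run with the CLI variant might look like this (the llama3 tag is just an example; the ollama client itself comes from Homebrew, as described below):

    ujust ollama                    # set up the Ollama container (one-time)
    systemctl --user start ollama   # start the generated service
    ollama run llama3               # pulls the model on first use, then opens a chat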

Additionally, the ollama command-line client needs to be installed through Homebrew (brew install ollama) in order to talk to the containerized server.

systemd does not autostart the containers; instead, you need to start the services manually with systemctl --user start ollama or systemctl --user start ollama-web.
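For example, to start the CLI container and inspect the resulting user service:

    systemctl --user start ollama        # begin serving on port 11434
    systemctl --user status ollama       # check that the container is up
    journalctl --user -u ollama -f       # follow the service logs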

The systemd unit files (Podman Quadlets) are stored under ~/.config/containers/systemd:

  • ollama.container - starts the Ollama server on port 11434 (http://localhost:11434)
  • ollama-web.container - starts the Open Web UI on port 8080 (http://localhost:8080)
  • ollama.network - defines the shared container network, named “ollama”
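For orientation, a Quadlet .container file of this kind roughly follows the sketch below; this is an illustrative reconstruction, not the exact file shipped by Bluefin-dx (the image tag, volume, and description are assumptions):

    # ~/.config/containers/systemd/ollama.container (illustrative sketch)
    [Unit]
    Description=Ollama server

    [Container]
    Image=docker.io/ollama/ollama:latest   # upstream Ollama image (assumed tag)
    PublishPort=11434:11434                # expose the API port listed above
    Volume=ollama:/root/.ollama            # persist downloaded models (assumed volume)
    Network=ollama.network                 # join the shared “ollama” network

    [Service]
    Restart=on-failure

    # No [Install] section: consistent with the services not autostarting.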

To check whether the containers launched correctly, use podman ps --all.
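Beyond the container status, you can also query the Ollama API directly to confirm the server responds on the port listed above:

    podman ps --all                            # list containers, including stopped ones
    curl http://localhost:11434/api/version    # should return the server version as JSON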
