Running nvidia-smi under Docker is almost two orders of magnitude slower than running it under Podman on the same machine: about 41 seconds versus about 0.5 seconds (full output below, plus a sketch of what I plan to check next).
time docker run --rm --gpus all ubuntu nvidia-smi
Fri Oct 18 15:02:56 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03              Driver Version: 560.35.03      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA TITAN RTX               On  |   00000000:05:00.0  On |                  N/A |
| 41%   36C    P8             23W /  280W |     992MiB /  24576MiB |     13%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+
________________________________________________________
Executed in   41.27 secs      fish           external
   usr time   28.96 millis    0.00 millis   28.96 millis
   sys time   30.21 millis    1.25 millis   28.96 millis
user001@hp-z620.lan1.home ~> time podman run --rm --security-opt=label=disable --device=nvidia.com/gpu=all ubuntu nvidia-smi
Fri Oct 18 15:03:48 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03              Driver Version: 560.35.03      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA TITAN RTX               On  |   00000000:05:00.0  On |                  N/A |
| 41%   36C    P8             22W /  280W |    1012MiB /  24576MiB |      1%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+
________________________________________________________
Executed in  488.19 millis    fish           external
   usr time  113.31 millis    0.19 millis  113.12 millis
   sys time   83.85 millis    1.04 millis   82.81 millis
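
To narrow down where the extra ~40 seconds goes, here is a rough sketch of what I plan to check next. I'm assuming that Docker's --gpus flag goes through the NVIDIA Container Toolkit's runtime hook, while the Podman invocation above uses the generated CDI spec; the commands themselves are ordinary docker / nvidia-ctk invocations:

# Baseline container start-up with no GPU wiring at all.
time docker run --rm ubuntu true

# Same image with the GPU hook, but without running nvidia-smi, to see
# whether the time is spent in the hook rather than in nvidia-smi itself.
time docker run --rm --gpus all ubuntu true

# nvidia-smi directly on the host, for comparison.
time nvidia-smi

# List the CDI devices that Podman's --device=nvidia.com/gpu=all path relies on.
nvidia-ctk cdi list
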
user001@hp-z620.lan1.home ~> cat /etc/os-release
NAME="Aurora"
VERSION="40.20241015.0 (Kinoite)"
ID=aurora
ID_LIKE="fedora"
VERSION_ID=40
VERSION_CODENAME="Archaeopteryx"
PLATFORM_ID="platform:f40"
PRETTY_NAME="Aurora-dx 40 (FROM Fedora Kinoite)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:universal-blue:aurora:40"
DEFAULT_HOSTNAME="aurora"
HOME_URL="https://getaurora.dev/"
DOCUMENTATION_URL="https://docs.projectbluefin.io"
SUPPORT_URL="https://github.com/ublue-os/bluefin/issues/"
BUG_REPORT_URL="https://github.com/ublue-os/bluefin/issues/"
SUPPORT_END=2025-05-13
VARIANT="Kinoite"
VARIANT_ID=aurora-dx-nvidia
OSTREE_VERSION='40.20241015.0'
BUILD_ID="ff123d2"
I tried googling this, but had no luck.
Also, is there any specific reason why the rest of the NVIDIA tools/libraries (libcublas, nvcc, libcudart1, etc.) are not installed in the base OS?
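
In case it clarifies what I'm after: as far as I can tell, the only way to get at nvcc and the CUDA libraries right now is through a container, along these lines (a minimal sketch; the nvidia/cuda image tag is an assumption on my part and would need to match the installed driver/CUDA version):

# Hypothetical CUDA devel image; pick a tag that matches the driver.
podman run --rm --security-opt=label=disable --device=nvidia.com/gpu=all \
    docker.io/nvidia/cuda:12.6.2-devel-ubuntu22.04 nvcc --version
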