Whether you’re training deep learning models, running GPU-accelerated simulations, or experimenting with CUDA code, setting up your Debian Bookworm system to support Docker with NVIDIA GPU passthrough is essential. This guide walks you through the process step-by-step—from driver installation to CUDA validation inside Docker containers.

Prerequisites

  • Debian 12 “Bookworm” (fresh or existing install)
  • An NVIDIA GPU
  • Admin (sudo) access
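
If you want to confirm the system sees your GPU at all before starting, a quick PCI check helps (lspci comes from the pciutils package):

lspci | grep -i nvidia

You should see at least one VGA or 3D controller entry mentioning NVIDIA.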

Step 1: Install NVIDIA Drivers

Skip this step if your NVIDIA drivers are already installed and working.

1.1 Add Non-Free Repositories

NVIDIA drivers are not included in Debian’s default main repository, so we need to enable the contrib, non-free, and non-free-firmware components.

sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository contrib
sudo add-apt-repository non-free
sudo add-apt-repository non-free-firmware
sudo apt update
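
Before moving on, you can confirm the new components are actually enabled; apt-cache policy lists every repository apt knows about, including its components:

apt-cache policy | grep -E 'contrib|non-free'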

1.2 Install NVIDIA Drivers (Auto-Detect Version)

This will automatically detect and install the appropriate driver for your GPU.

sudo apt install -y nvidia-driver firmware-misc-nonfree
sudo reboot

1.3 Confirm Driver Installation

After reboot, run:

nvidia-smi

You should see your GPU, driver version, and other usage stats listed. If so—congrats! Your drivers are working.
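
If nvidia-smi fails instead, Debian’s optional nvidia-detect utility (available from the repositories we just enabled) reports which driver package it recommends for your specific card:

sudo apt install -y nvidia-detect
nvidia-detect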

Step 2: Install the CUDA Toolkit

You have two options for installing CUDA. I strongly recommend Option A for compatibility and access to the latest features.

Option A: Install Latest CUDA via NVIDIA Repository (Recommended)

2.1 Add the CUDA Keyring and Repository

wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update

2.2 Install CUDA Toolkit

sudo apt install -y cuda-toolkit-12-3
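
On my install, this places the toolkit under /usr/local/cuda-12.3 with a /usr/local/cuda symlink pointing at it, which is the path the next step assumes; you can check with:

ls -l /usr/local/ | grep -i cuda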

2.3 Add CUDA to Your Environment Variables

Run the following to append the CUDA paths to your ~/.bashrc and reload it:

echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc

2.4 Verify CUDA Installation

nvcc --version

You should see output with the version of nvcc (the CUDA compiler). This confirms a successful install.
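
If you want a stronger end-to-end check than a version string, you can compile and run a tiny kernel. This is just a minimal sketch (the /tmp paths are arbitrary, and it assumes a host C++ compiler; sudo apt install -y g++ if you lack one):

cat > /tmp/hello.cu <<'EOF'
#include <cstdio>

// Trivial kernel: each GPU thread prints its index.
__global__ void hello() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main() {
    hello<<<1, 4>>>();        // launch one block of four threads
    cudaDeviceSynchronize();  // wait for the kernel so its printf output is flushed
    return 0;
}
EOF
nvcc /tmp/hello.cu -o /tmp/hello && /tmp/hello

Four “Hello from GPU thread N” lines mean the compiler, driver, and GPU are all talking to each other.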

Option B: Install via Debian Package (Simpler but Older)

If you don’t need the latest features, this is a quick method:

sudo apt install -y nvidia-cuda-toolkit
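
The Debian package puts nvcc directly on your PATH (no environment changes needed), though the version lags well behind NVIDIA’s repository. Verify it the same way:

nvcc --version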

Step 3: Install Docker and NVIDIA Container Toolkit

Docker allows you to containerize your workloads. The NVIDIA Container Toolkit enables GPU support inside those containers.

3.1 Install Docker (Official Repo)

sudo apt install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io

Enable and start Docker:

sudo systemctl enable docker
sudo systemctl start docker
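
Before continuing, you can confirm the daemon came up:

systemctl is-active docker
docker --version

The first command should print “active”.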

(Optional: Add your user to the Docker group)

sudo usermod -aG docker $USER
newgrp docker
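
Note that newgrp only affects the current shell; log out and back in for the group change to apply everywhere. To confirm you can reach the daemon without sudo, run the standard hello-world image:

docker run --rm hello-world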

3.2 Install NVIDIA Container Toolkit
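
Note: nvidia-container-toolkit ships from NVIDIA’s own apt repository, not Debian’s, so if the install below can’t find the package, add NVIDIA’s repository first. These are the commands from NVIDIA’s Container Toolkit install guide (as of this writing):

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list > /dev/null
sudo apt update

With the repository in place, install the toolkit, register the NVIDIA runtime with Docker, and restart the daemon: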

sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
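
You can verify that Docker picked up the new runtime; the Runtimes line in docker info should now include nvidia:

docker info | grep -i runtime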

Step 4: Test Docker with CUDA Support

Let’s verify that everything is working inside a container.

4.1 Run NVIDIA-SMI in Docker

sudo docker run --rm --gpus all nvidia/cuda:12.3.0-base-ubuntu22.04 nvidia-smi

You should see the same output as your native nvidia-smi—this means Docker can talk to your GPU.

4.2 Run a CUDA Sample

Let’s test with the official CUDA Samples project. Two tweaks make this work reliably: the base image ships no compiler, so we use the devel variant (which includes nvcc), and we check out the samples’ v12.3 tag so they match the toolkit inside the image:

sudo docker run --rm --gpus all nvidia/cuda:12.3.0-devel-ubuntu22.04 /bin/bash -c \
"apt update && apt install -y git build-essential && \
git clone --branch v12.3 --depth 1 https://github.com/NVIDIA/cuda-samples.git && \
cd cuda-samples/Samples/1_Utilities/deviceQuery && \
make && ./deviceQuery"

This clones, builds, and runs the deviceQuery binary, which should print your GPU’s details and end with:

Result = PASS

That’s it! Your Debian 12 Bookworm system is now fully equipped for GPU-accelerated Docker workloads with NVIDIA CUDA support. Whether you’re training AI models, rendering video, or crunching numbers with CUDA, you’re ready to dive into powerful containerized GPU computing.

If you run into any issues, feel free to reach out or share logs—I’ve been through a few of the bumps myself and would be happy to help.
