
Docker

Docker is a platform for containerizing software, with easy deployment and scalability. Basic usage is relatively simple, and most Linux knowledge applies to it directly. The main problem for running the projects on Docker is getting OpenGL acceleration to work, since its usual focus is compute workloads (CUDA, ML) or services (Jellyfin, NextCloud, APIs, etc).

There are quite a lot of combinations of hardware1, platform and intended use, and guides like this can only go so far; this one focuses on getting OpenGL working.

Docker can't open native GUIs on the Host OS. The intended usages are:
  • Implementing a backend, e.g. with FastAPI
  • Serving and accessing a Gradio web page
  • Isolation, security or headless usage

⚡️ Installing

  • (Windows) Install WSL2, the default Ubuntu 22.04 distro is fine
    PowerShell
    wsl --install
    
    • Preferably add a user with sudo adduser <username> (inside WSL)
    • And make it the default with ubuntu config --default-user <username>

  • Install Docker Desktop for your platform, or via your package manager
    • Linux users might only want Docker Engine, given Docker Desktop's bloat and licensing model
    • Windows: Enable Settings > Resources > WSL Integration > Default distro

  • (Linux) You might need to install Docker Compose if your distro splits it into a separate package

  • (NVIDIA) Install the NVIDIA Container Toolkit for your distro (a quick sketch for apt-based systems follows this list)
    • I don't have to say "have the NVIDIA drivers installed on the host system", do I?
    • Windows: Follow the apt instructions on the link above, inside WSL

DO NOT INSTALL NVIDIA OR DISPLAY DRIVERS (MESA) ON THE WSL DISTRO PER NVIDIA DOCS
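
If you're on an apt-based distro (including the default WSL Ubuntu), the toolkit step above boils down to roughly this - a sketch, assuming NVIDIA's apt repository was already added per the linked instructions:

Terminal
# Install the container toolkit (repository setup per NVIDIA's install guide)
sudo apt-get install -y nvidia-container-toolkit

# Register the NVIDIA runtime with the Docker Engine
sudo nvidia-ctk runtime configure --runtime=docker

After restarting the Docker Engine (next step), docker run --rm --gpus all ubuntu nvidia-smi is a quick smoke test from NVIDIA's guide that the GPU is visible inside containers.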


Restart the Docker Engine:

  • (Linux) Run sudo systemctl restart docker in the terminal
  • (Others) Close and reopen Docker Desktop from the system tray

(Windows) It may be a good idea to reboot the whole system


🚀 Context

Per the Monorepo structure, I've configured a docker-compose.yml file that builds a base.dockerfile with common dependencies and, hopefully, OpenGL acceleration. The other dockerfiles start off from the end of this base image for each specific project.

Have enough RAM and don't want to hurt your SSD's TBW?

Edit or create the file /etc/docker/daemon.json (e.g. sudo nano /etc/docker/daemon.json) and add:

{
    "data-root": "/tmp/docker"
}

Keep any other keys you already have, and restart the Docker Engine afterwards for the change to take effect.

Most projects use ModernGL for interfacing with OpenGL, which renders the shaders. Context creation is handled by glcontext, which selects the proper platform API to use.

What to avoid

Long story short, we want to avoid using X11 inside Docker as much as possible, and even on native Linux! Its code is feature-frozen with many technical debts, it requires a real "Display" for graphics APIs (OpenGL, Vulkan) to even work, and there is no headless mode.

  • One might think that prepending commands with xvfb-run could work, but this will always use software rendering, which happens entirely on the CPU - a fraction of the speed of a GPU. So, we want to avoid xvfb at all costs.

This isn't an issue per se when running natively, as OpenGL contexts created on a live desktop environment WILL have GPU acceleration via GLX, provided by the current driver - or via EGL itself, if we're running Wayland.


Why we want EGL

Luckily, the Khronos Group developed EGL, and NVIDIA the libglvnd libraries. Together they solve this: EGL provides OpenGL context creation directly, without relying on WGL/CGL/GLX, so we can have truly GPU-accelerated headless contexts, and libglvnd provides a vendor-neutral dispatch layer for it.

Well, not so fast - that is, only if the available devices are GPUs themselves. It is well known that NVIDIA provides its own proprietary drivers and firmware for its GPUs, with shared libraries (`.so` files on Linux, `.dll` on Windows) pointing to the driver's libraries; while AMD and Intel GPUs on Linux run the godly Mesa project. Mesa always provides at least the llvmpipe device, which is a software rendering fallback.
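
A quick way to see which device an EGL context actually lands on is to create a standalone ModernGL context forcing the EGL backend and print its renderer string - a sanity check you can run on the host or inside a container, assuming moderngl is installed in the environment. A GPU name is what we want; llvmpipe means software rendering:

Terminal
# Force an EGL context through glcontext and print which renderer was picked
python3 -c "import moderngl; ctx = moderngl.create_context(standalone=True, backend='egl'); print(ctx.info['GL_RENDERER'])"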


Native Linux vs WSL

Now, here's where it gets tricky. Docker always runs a virtualized Linux machine, but on Windows that sits inside a pseudo-native Linux in WSL (three layers lol). The previously installed NVIDIA Container Toolkit deals with the two cases slightly differently:

  • On Windows, the NVIDIA drivers used are from the host (Windows itself), "redirected" to WSL. The wrapped binaries are found at /usr/lib/wsl on the WSL distro, provided by the container toolkit. This is why no drivers should be installed on WSL. The llvmpipe device can be a pointer to the d3d12.so file, with actual GPU acceleration.

  • On Linux, the NVIDIA drivers used are from the host (Linux itself), directly. The files are found in the regular /usr/lib location, provided by the container toolkit wrapping the host's drivers. Nothing sketchy: llvmpipe is always software rendering, and a proper GPU device shows up (see the quick checks below).
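
To peek at what each case exposes, a couple of optional checks - paths as per the bullets above; these aren't required, they just help debugging:

Terminal
# Inside WSL: the host's redirected driver libraries should show up here
ls /usr/lib/wsl

# On native Linux: the loader should know about the vendor's EGL/GL libraries
ldconfig -p | grep -iE "libEGL|nvidia"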


But why is this important?

If anything goes wrong in this complicated soup of shared libraries, your rendering speeds won't be 290 fps, but 40, 20, 5 fps at most, without utilizing the GPU.

  • The fun thing is that /usr/lib/wsl isn't mapped automatically to Docker on WSL 🤡

  • Getting EGL to work on Cloud Providers can be tricky 🎈


Talk is cheap, show me the code

Thankfully, we have the nvidia/opengl:1.2-glvnd-runtime-ubuntu22.04 image to start from.


We absolutely need to set those env vars:

ENV NVIDIA_VISIBLE_DEVICES="all"
ENV NVIDIA_DRIVER_CAPABILITIES="all"

Additionally, for ShaderFlow to use EGL instead of GLFW, set:

# Can disable with WINDOW_EGL=0 (sends backend=None to Window class)
ENV WINDOW_BACKEND="headless"

# Alternatively, use shaderflow scene class args
scene = ShaderScene(backend="headless")

For pure ModernGL users:

Sending backend="headless" is the same as using the moderngl_window.context.headless.Window class, alongside sending a backend="egl" kwarg to that Window class's initialization when $WINDOW_EGL is "1".
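
That mapping, expressed directly - a minimal sketch of the equivalent moderngl-window call described above, assuming a recent moderngl-window and its default window settings; handy to confirm the renderer from inside the container:

Terminal
# Roughly what WINDOW_BACKEND=headless + WINDOW_EGL=1 resolves to
python3 -c "from moderngl_window.context.headless import Window; print(Window(backend='egl').ctx.info['GL_RENDERER'])"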


Almost done, but there are some CLI args to go:

  • On any platform, we must add --gpus all to the Docker Engine's CLI for it to find GPUs. If running from the configured `docker-compose.yml`, this is already set up.

  • On Windows, due to the d3d12.so lib hack, we must add -v /usr/lib/wsl:/usr/lib/wsl to the Docker Engine's CLI (already configured in `docker-compose.yml`). This maps WSL's copies of the host OS's driver libraries into Docker's virtualized OS; see the plain docker run sketch below.
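
For reference, outside of compose those flags translate to a plain docker run roughly like this - a sketch, where the image is the base one mentioned earlier and bash is just a placeholder for your real entrypoint:

Terminal
# Linux host: expose the GPUs to the container
docker run --rm -it --gpus all nvidia/opengl:1.2-glvnd-runtime-ubuntu22.04 bash

# Windows (WSL) host: additionally map the redirected driver libraries
docker run --rm -it --gpus all -v /usr/lib/wsl:/usr/lib/wsl nvidia/opengl:1.2-glvnd-runtime-ubuntu22.04 bash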


Checking stuff is working

I've configured a Dockerfile for you to test your setup. Check its output messages:

Terminal
docker-compose run --build glinfo

If everything is nominal until now, you've probably got a healthy setup 🎉

For reference, here are the final base Dockerfile and docker-compose.yml files.

⭐️ Usage

Did this page help you?

Consider joining my Sponsors and helping me continue everything!

All of that..

..was just to say that I've suffered and automated enough, so you can simply run:

Terminal
# Torch CPU already managed 😉
docker-compose run --build depthflow

# Somehow, faster than native Linux?
docker-compose run --build shaderflow

Functionality is limited

You're expected to add your own .py files via a separate Dockerfile (recommended), or edit the ones currently at Docker/Scripts/*.py for your current intentions (anti-pattern).

In the future, there will be $project-gradio runnable images

Your own Dockerfile

You can also build Docker/base.dockerfile locally as -t broken-base and base your own dockerfiles off of it with FROM broken-base:latest.

  • Not much different from how it works now:
FROM broken-base:latest
CMD ["python3", "Docker/Scripts/depthflow.py"]

This way, no reinstall is required, and you have everything available right away
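
The commands for that look roughly like this - a sketch assuming you run them from the monorepo root, so the Docker/base.dockerfile path and build context line up; my-project is just a placeholder tag:

Terminal
# Build the shared base image once
docker build -f Docker/base.dockerfile -t broken-base .

# Build your own Dockerfile on top of it
docker build -t my-project .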



  1. Untested on AMD Radeon, Intel iGPU, Intel ARC. Your mileage may vary, here be dragons!