# Runtime Examples
This is a user topic, not a separate product area. robo.nix is the file you edit when a project needs more native tools or runtime libraries inside robo shell.
## Components
- **python-uv**: CPython from nixpkgs-python plus uv, including the CPython shared library path. This component is always present in generated projects.
- **native-build**: C/C++ build tools plus native runtime libraries such as libstdc++, zlib, and legacy libcrypt.
- **linux-headers**: Linux input/kernel headers for native packages such as evdev.
- **desktop-gl**: OpenGL, EGL, GLVND, Vulkan loader, Wayland, X11, GLFW windowing, GLU, and legacy Xt client libraries.
- **qt6**: Qt6 CMake packages, tools such as qtpaths6, plugins, and runtime libraries for Qt services and viewers.
- **cuda-toolkit**: Nix-managed CUDA compiler, headers, and the CUDA build/link surface.
## Example: native input package
Use this when a Python package such as evdev builds against Linux input headers:
```nix
{
  components = [
    "python-uv"
    "native-build"
    "linux-headers"
  ];
  extraPackages = pkgs: [
  ];
  extraRuntimeLibraries = pkgs: [
  ];
}
```

## Example: simulator or desktop window
```nix
{
  components = [
    "python-uv"
    "native-build"
    "desktop-gl"
  ];
  extraPackages = pkgs: [
  ];
  extraRuntimeLibraries = pkgs: [
  ];
}
```

desktop-gl covers the common GLFW Linux windowing path, including Wayland, X11, Vulkan loader, GLVND, EGL, libxkbcommon, GLU, and legacy Xt client libraries used by larger simulator stacks. It is application/runtime support, not a GPU driver selector.
## Example: Qt service or viewer
Use qt6 when a vendor service or local CMake project needs Qt6 packages such as Qt6::Core, Qt6::Network, or Qt6::Core5Compat:
```nix
{
  components = [
    "python-uv"
    "native-build"
    "desktop-gl"
    "qt6"
  ];
}
```

Host graphics wrapper selection is separate from desktop-gl. By default, hostGraphics = "auto"; uses /run/opengl-driver on NixOS hosts and the generic robo-provided nixGL wrapper on other Linux hosts. If a simulator must use the NVIDIA nixGL wrapper, set:
```nix
{
  components = [
    "python-uv"
    "native-build"
    "desktop-gl"
    "cuda-toolkit"
  ];
  hostGraphics = "nixgl-nvidia";
}
```

Leave hostGraphics = null; when the project should not import a host graphics wrapper. With desktop-gl, the Nix-managed client libraries still apply; they do not select a GPU driver.
If a project already works correctly under a generic nixGL wrapper, set hostGraphics = "nixgl"; explicitly. In that mode, robo keeps using the Nix-managed Python and runtime libraries from the project shell and imports only graphics-related variables from the selected nixGL wrapper. robo-nix provides nixGL through its own flake inputs, so users do not need to install nixGL in their profile for normal use.

Use hostGraphics = "nixgl-nvidia"; only when the project must use the NVIDIA nixGL wrapper and should fail rather than fall back to a Mesa wrapper. robo detects the host NVIDIA driver version with nvidia-smi or /proc/driver/nvidia/version; set ROBO_NIX_NVIDIA_VERSION only when those host probes are unavailable.
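For the explicit generic-wrapper mode just described, a minimal robo.nix sketch mirrors the earlier desktop example with one extra line:

```nix
{
  components = [
    "python-uv"
    "native-build"
    "desktop-gl"
  ];
  hostGraphics = "nixgl";
}
```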
## Example: CUDA extension build
Use cuda-toolkit when a Python package builds native CUDA extensions. The host NVIDIA driver is still outside the Nix-managed toolkit:
```nix
{
  components = [
    "python-uv"
    "native-build"
    "cuda-toolkit"
  ];
  extraPackages = pkgs: [
  ];
  extraRuntimeLibraries = pkgs: [
  ];
}
```

When a project appears to need the host libcuda.so.1, robo shell and robo run try to bridge a visible host driver library automatically. The probe checks ROBO_NIX_LIBCUDA_PATH, LD_LIBRARY_PATH, ldconfig -p, and known host driver locations used by common Linux and NixOS driver installs.
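The search order above can be sketched as follows. This is only an illustration of the described probe order, not robo's actual implementation, and the KNOWN_DIRS entries are hypothetical examples of "known host driver locations":

```python
import os
import subprocess
from pathlib import Path

# Hypothetical examples of known host driver locations; the real list differs.
KNOWN_DIRS = ["/run/opengl-driver/lib", "/usr/lib/x86_64-linux-gnu"]

def find_libcuda():
    """Locate a host libcuda.so.1 following the documented probe order."""
    # 1. Explicit override: a file, or a directory containing libcuda.so.1.
    override = os.environ.get("ROBO_NIX_LIBCUDA_PATH")
    if override:
        p = Path(override)
        candidate = p if p.is_file() else p / "libcuda.so.1"
        if candidate.is_file():
            return str(candidate)
    # 2. Directories already on LD_LIBRARY_PATH.
    for d in os.environ.get("LD_LIBRARY_PATH", "").split(":"):
        if d and (Path(d) / "libcuda.so.1").is_file():
            return str(Path(d) / "libcuda.so.1")
    # 3. The dynamic linker cache, via ldconfig -p.
    try:
        out = subprocess.run(["ldconfig", "-p"], capture_output=True, text=True)
        for line in out.stdout.splitlines():
            if "libcuda.so.1" in line and "=>" in line:
                return line.split("=>", 1)[1].strip()
    except OSError:
        pass
    # 4. Known host driver install locations.
    for d in KNOWN_DIRS:
        if (Path(d) / "libcuda.so.1").is_file():
            return str(Path(d) / "libcuda.so.1")
    return None
```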
Inside robo shell, UV_PYTHON points at the Nix-managed CPython so uv creates project environments from the runtime interpreter. For ad hoc installs, uv pip install ... targets $UV_PROJECT_ENVIRONMENT/bin/python automatically when that environment exists, unless you pass an explicit target such as --python, --active, --system, --target, or --prefix.
You do not need to run source .venv/bin/activate inside robo shell. If a copied setup command activates the venv anyway, the virtualenv marker may appear as usual, and the [robo] prompt marker stays single.
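As a quick illustration, you can confirm the wiring from inside robo shell; the printed paths will vary by project:

```shell
# Inside robo shell: confirm uv is wired to the Nix-managed interpreter.
echo "$UV_PYTHON"                # Nix-managed CPython path
echo "$UV_PROJECT_ENVIRONMENT"   # environment uv pip install targets by default
```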
Override the detected library explicitly when the driver lives elsewhere:
```shell
export ROBO_NIX_LIBCUDA_PATH=/path/to/libcuda.so.1
robo shell
```

You may also set ROBO_NIX_LIBCUDA_PATH to a directory containing libcuda.so.1. Disable automatic host CUDA bridging with:
```shell
export ROBO_NIX_DISABLE_HOST_CUDA_AUTO=1
```

Useful host checks:
```shell
nvidia-smi
ldconfig -p | grep libcuda.so.1
```

Expected driver-boundary failures include libcuda.so.1: cannot open shared object file, CUDA driver version is insufficient, and CUDA driver API errors from packages such as Triton or CUDA Python. Nix can provide CUDA build tools, but the NVIDIA kernel driver and libcuda.so.1 still come from the host.
If the Nix CUDA toolkit root needs to come from a local driver/toolkit install, set ROBO_NIX_CUDA_ROOT=/path/to/cuda. This changes the toolkit path exposed by the cuda-toolkit component; it does not make robo own the host kernel driver.
## Adding a missing shared library
When a Python extension reports a missing shared library, search for the Nix package that provides it:
```shell
robo search libassimp.so
```

robo search only prints candidates and a snippet. You still choose the package and edit robo.nix yourself:
```nix
{
  components = [
    "python-uv"
    "native-build"
  ];
  extraPackages = pkgs: [
  ];
  extraRuntimeLibraries = pkgs: [
    pkgs.assimp
  ];
}
```

## Environment variables
Public environment knobs are intentionally small:
| Variable | Purpose |
|---|---|
| ROBO_NIX_SHELL | Override the interactive shell launched by robo shell. |
| ROBO_NIX_DEBUG | Print debug lines and use plain progress rendering. |
| ROBO_NIX_NO_SPINNER | Disable spinner/progress tree rendering. |
| ROBO_NIX_LIBCUDA_PATH | Explicit host libcuda.so.1 file or containing directory. |
| ROBO_NIX_DISABLE_HOST_CUDA_AUTO | Disable automatic host CUDA bridge probing. |
| ROBO_NIX_CUDA_ROOT | Override the CUDA toolkit root exported by cuda-toolkit. |
| ROBO_NIX_NIXGL | Override the nixGL wrapper path selected by hostGraphics = "auto";, "nixgl";, or "nixgl-nvidia";. |
| ROBO_NIX_NVIDIA_VERSION | Override the detected host NVIDIA driver version used by hostGraphics = "nixgl-nvidia";. |
| ROBO_NIX_LOCK_TIMEOUT | Seconds to wait for robo-owned .robo-nix/*.lock files. |
| ROBO_NIX_DEFAULT_SOURCE_URL | Override the generated flake input URL for local development. |
When native-build is selected, the shell also exports ROBO_NIX_LIBC_DEV as the active compiler libc development prefix for build scripts that need to inspect it.
Values that affect runtime construction, such as CUDA driver/toolkit paths, are part of the active shell freshness key. Existing robo shell sessions refresh at the next prompt when those inputs change. The key also follows common local .nix imports from robo.nix and the project flake, so splitting runtime libraries or component lists into helper Nix files keeps refresh behavior truthful. When Nix reports evaluated local Nix files during a successful setup, robo records those safe relative paths under .robo-nix/ and folds them into later refresh/cache keys.
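As a sketch of such a split (the file name is hypothetical), robo.nix can import a sibling helper file and the freshness key will follow it:

```nix
# robo.nix: keep the runtime library list in a local helper file.
let
  runtimeLibs = import ./runtime-libs.nix;
in {
  components = [
    "python-uv"
    "native-build"
  ];
  extraRuntimeLibraries = runtimeLibs;
}
```

Here runtime-libs.nix would evaluate to a function of the same shape as an inline extraRuntimeLibraries value, such as pkgs: [ pkgs.assimp ].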
After a successful setup, robo caches the captured runtime shell environment by that same key. Later robo shell and robo run attempts can reuse it instantly as long as the referenced Nix store paths still exist.
Run robo refresh when you want to clear robo-owned local runtime state under .robo-nix/. Inside an active robo shell, this requests a refresh through the prompt hook; the shell updates at the next prompt. Outside a shell, the next robo shell or robo run rebuilds the runtime cache.