
Conversation

@bhavishadawada
Contributor

Added x86_64 NVIDIA GPU support to resolve CUDA runtime library errors.

New Docker Compose Profile:

- Added a backend_x86_cuda service for x86_64 systems with NVIDIA GPUs
- Mounts CUDA libraries from the host system (/usr/local/cuda, /usr/lib/x86_64-linux-gnu)
- Sets CUDA environment variables (CUDA_HOME, LD_LIBRARY_PATH)
- Enables NVIDIA GPU device access for Docker containers
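For illustration, a minimal sketch of what such a service could look like; the image name and the exact volume/device syntax are assumptions here, not the PR's actual definition:

```yaml
services:
  # x86_64 hosts with an NVIDIA GPU (Tesla T4, RTX, etc.)
  backend_x86_cuda:
    profiles: ["x86_cuda"]
    image: backend:latest                # placeholder image name
    environment:
      - CUDA_HOME=/usr/local/cuda
      - LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/lib/x86_64-linux-gnu
    volumes:
      # Mount the host CUDA runtime so compiled models can find libcudart.so.10.2
      - /usr/local/cuda:/usr/local/cuda:ro
      - /usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:ro
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```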

Enhanced Hardware Detection: Updated get_nvidia_libs_versions.sh to automatically detect x86_64 + CUDA systems (see the sketch after this list).

- Added logic to set DOCKER_PROFILE='x86_cuda' for x86_64 systems with GPU libraries
- Maintains the existing detection for Jetson (tegra) and CPU-only (generic) systems
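A rough sketch of that selection logic, assuming file-existence and ldconfig checks; the real conditions in get_nvidia_libs_versions.sh may differ:

```bash
#!/usr/bin/env bash
# Illustrative profile selection; not the script's exact implementation.
ARCH="$(uname -m)"

if [ -f /etc/nv_tegra_release ]; then
    # NVIDIA Jetson (Xavier, Orin): ARM64 with a Tegra GPU
    DOCKER_PROFILE='tegra'
elif [ "$ARCH" = "x86_64" ] && { [ -d /usr/local/cuda ] || ldconfig -p | grep -q libcudart; }; then
    # x86_64 host with NVIDIA CUDA libraries installed
    DOCKER_PROFILE='x86_cuda'
else
    # No GPU libraries detected: fall back to the CPU-only profile
    DOCKER_PROFILE='generic'
fi

export DOCKER_PROFILE
echo "Selected Docker Compose profile: ${DOCKER_PROFILE}"
```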

Documentation: Added clear comments to the Docker Compose services explaining the target hardware:

- backend_tegra_gpu_enabled: NVIDIA Jetson devices (Xavier, Orin) - ARM64 with Tegra GPU
- backend_generic: CPU-only systems - no GPU acceleration
- backend_x86_cuda: x86_64 systems with NVIDIA GPU (Tesla T4, RTX, etc.)

Problem Solved: Compiled models that expect the CUDA 10.2 runtime libraries now have access to the host system's CUDA installation, eliminating "libcudart.so.10.2: cannot open shared object file" errors.

Testing: Automatic profile selection ensures correct configuration based on detected hardware without manual intervention.
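As an example of how the selected profile could be consumed (this assumes the script exports DOCKER_PROFILE when sourced; the exact startup integration is not shown in this description):

```bash
# Detect the hardware, then start the matching Compose service.
source ./get_nvidia_libs_versions.sh
docker compose --profile "${DOCKER_PROFILE}" up -d
```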
