Basically, the extension should be called the Ultimate WSL2 & WSLg Automasturbator.
Installing RAPIDS should be enough (it pulls in CUDA as a dependency).
Important: do not install a Linux GeForce driver inside WSL; the Windows driver is shared.
Use WSLg and DirectGPU to link libcuda.so (this should happen automatically).
Provide a simple dropdown menu to select the CUDA version; ideally, run nvidia-smi on Windows, read the reported CUDA version (e.g. 12.8), and run the RAPIDS install inside Ubuntu WSL against that same CUDA 12.8.
Allow choosing the RAPIDS install method: pip, Conda, or a Docker container (recommended).
In case of errors, provide a hint on where to download the Windows CUDA toolkit and drivers, and how to set the PATH environment variable when versions mismatch; multiple CUDA versions can coexist on Windows.
Provide a meticulous, step-by-step guide for getting it running (AI assistants cannot do this reliably).
Show how to run a GUI app, for example ComfyUI in a Linux Docker container, and render only that window on Windows using WSLg (previously the X410 project did this by streaming X11 over sockets).
Automate creating a Python venv per project inside the container.
Utilize all available VRAM.
Support the Intel NPU.
Support the iGPU (Intel Xe or Arc) together with a GeForce RTX.
Utilize all CPUs (the Core Ultra 9 285K has 24 processors, but WSL always hardcodes 32, and that does not work).
Allow dynamic RAM allocation (for example, if the machine has 128 GB, start with 50%: 64 GB).
Know how to auto-update RAPIDS (asking for user permission on each update, since an update can break everything), plus update the CUDA build of PyTorch in each venv created inside the container.
Can provide the NANOTRIK.AI custom OSS OpenAI gateway API, including OpenVINO, vLLM or SGLang, LM Studio, Ollama, etc. (contact me).
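The WSLg-in-Docker item could be demonstrated with a run command like the following configuration sketch. The `/tmp/.X11-unix` and `/mnt/wslg` mounts are the standard WSLg sockets that WSL exposes inside the distro; the image name `comfyui:latest` is a placeholder for whatever image the user builds:

```shell
# Sketch: run a Linux GUI app in Docker and let WSLg render its window on
# the Windows desktop (requires Docker with the NVIDIA container toolkit).
docker run --rm -it \
    --gpus all \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /mnt/wslg:/mnt/wslg \
    -e DISPLAY="$DISPLAY" \
    -e WAYLAND_DISPLAY="$WAYLAND_DISPLAY" \
    -e XDG_RUNTIME_DIR="$XDG_RUNTIME_DIR" \
    -e PULSE_SERVER="$PULSE_SERVER" \
    comfyui:latest
```

With these mounts the app's window appears directly on the Windows desktop through WSLg's built-in Wayland/X11 bridge, with no separate X server such as X410 needed.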
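The CPU-count and RAM items map to the documented WSL2 global settings in `.wslconfig` on the Windows side. The keys below are real; the values simply mirror the example machine above (24 cores, 128 GB RAM, start with half), and the swap size is an assumption to tune per workload:

```ini
# %UserProfile%\.wslconfig (Windows side; applies to all WSL2 distros)
[wsl2]
processors=24   # match the real core count instead of a wrong default
memory=64GB     # 50% of 128 GB as a starting point
swap=16GB       # assumption: modest swap; adjust per workload
```

Run `wsl --shutdown` from Windows after editing so the new limits take effect on the next WSL start.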
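The dropdown flow above (read the Windows driver's CUDA version from nvidia-smi, then install a matching RAPIDS build) can be sketched in shell. The version parsing matches the real nvidia-smi banner format, and the pip index URL follows the published RAPIDS `cu12` wheel convention, but treat the exact package set as an assumption:

```shell
#!/bin/sh
# Extract "CUDA Version: X.Y" from nvidia-smi banner text on stdin.
extract_cuda_version() {
    grep -o 'CUDA Version: [0-9][0-9]*\.[0-9]*' | head -n 1 | awk '{print $3}'
}

# Map a driver CUDA version to the RAPIDS pip "cuXX" suffix (major version).
rapids_suffix() {
    case "$1" in
        12.*) echo "cu12" ;;
        11.*) echo "cu11" ;;
        *)    echo "unsupported CUDA version: $1" >&2; return 1 ;;
    esac
}

# Inside WSL2 the Windows nvidia-smi.exe is on PATH, so the real call would
# be `nvidia-smi.exe | extract_cuda_version`. A captured banner line stands
# in here for illustration.
banner='| NVIDIA-SMI 560.94    Driver Version: 560.94    CUDA Version: 12.8 |'
ver=$(printf '%s\n' "$banner" | extract_cuda_version)
suffix=$(rapids_suffix "$ver")
echo "pip install cudf-${suffix} --extra-index-url=https://pypi.nvidia.com"
```

The printed command follows the RAPIDS pip convention of suffixed packages (`cudf-cu12`) served from the NVIDIA index; a full install would list more packages (cuml, cugraph, and so on) per the RAPIDS install selector.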
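The venv-per-project automation plus the opt-in PyTorch update pass could look like this minimal sketch. The `PROJECTS_ROOT` layout and the confirmation prompt are assumptions; the `cu128` wheel index is the real PyTorch naming scheme, but the right suffix depends on the detected CUDA version:

```shell
#!/bin/sh
# Create one .venv per project directory and, on request, upgrade the CUDA
# build of PyTorch inside each. PROJECTS_ROOT is a hypothetical layout.
PROJECTS_ROOT="${PROJECTS_ROOT:-$HOME/projects}"

setup_venvs() {
    for proj in "$PROJECTS_ROOT"/*/; do
        [ -d "$proj" ] || continue
        if [ ! -d "$proj/.venv" ]; then
            python3 -m venv "$proj/.venv"
            echo "created venv for $proj"
        fi
    done
}

# Ask before updating, since an update can break everything (as noted above).
update_venvs() {
    printf 'Update PyTorch CUDA wheels in every venv? [y/N] '
    read -r answer
    [ "$answer" = "y" ] || return 0
    for proj in "$PROJECTS_ROOT"/*/; do
        [ -x "$proj/.venv/bin/pip" ] || continue
        "$proj/.venv/bin/pip" install --upgrade torch \
            --index-url https://download.pytorch.org/whl/cu128
    done
}

setup_venvs
```

A RAPIDS updater would follow the same pattern: prompt first, then upgrade the suffixed packages from the NVIDIA index in each environment.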