You ask — we answer!

CUDA® in WSL

Application containerization has become a popular approach to workload management. While typical applications are straightforward to package and run, applications that use GPUs require special consideration because of an additional abstraction layer: WSL (Windows Subsystem for Linux). Microsoft custom-built this Linux kernel to integrate tightly with Windows Server, so applications run seamlessly across both systems.

A common question is the correct order for installing the drivers and libraries that enable GPU computing. To save you time, we’ve prepared this step-by-step guide.

The correct sequence is to install WSL and a Linux distribution inside it first. Next, install the GPU drivers on Windows Server and the NVIDIA® CUDA® Toolkit in Linux. Optionally, Docker Desktop can then be added to run GPU-enabled containers.

Before proceeding, ensure that all necessary updates are installed on your system.

Install WSL and Ubuntu

To get started, select Start > PowerShell and run it with administrative privileges. The following command enables WSL:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

Once the installation has finished, restart the operating system as usual and reopen PowerShell. Let’s update the WSL kernel:

wsl --update

Then install the desired Linux distribution, for example Ubuntu 22.04 LTS:

wsl --install -d Ubuntu-22.04

The new Ubuntu 22.04 application will appear in the Start menu. Clicking it opens a terminal window running a Linux Ubuntu instance. Update the package cache and upgrade all installed packages to the latest versions:

sudo apt update && sudo apt -y upgrade

The next step is to install Python’s package manager (pip) and the NVIDIA® CUDA® Toolkit. Note that there is no need to install GPU drivers inside this Linux instance; WSL uses the driver installed on the Windows host:

sudo apt -y install python3-pip nvidia-cuda-toolkit
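To confirm that the toolkit landed on the system, you can check for the nvcc compiler (running nvcc --version in the terminal works just as well). Here is a minimal Python sketch; the helper name nvcc_version is our own for illustration:

```python
import shutil
import subprocess

# Sketch: confirm that the CUDA toolkit's nvcc compiler is installed
# and return its version line. Returns None when nvcc is not on PATH
# (e.g. before the toolkit is installed).
def nvcc_version():
    path = shutil.which("nvcc")
    if path is None:
        return None
    out = subprocess.run([path, "--version"], capture_output=True, text=True)
    # The last line of nvcc's output carries the release number.
    return out.stdout.strip().splitlines()[-1]

print(nvcc_version())
```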

To ensure that user-level scripts (such as those installed by pip) can be found, add their directory to the $PATH variable. You can achieve this by opening the following file:

nano ~/.bashrc

And append the following line at the end (replace usergpu with your own username):

export PATH=/home/usergpu/.local/bin${PATH:+:${PATH}}

Save the file and exit the Ubuntu terminal.
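The ${PATH:+:${PATH}} expansion appends the existing PATH, separated by a colon, only when PATH is non-empty, which avoids a dangling colon in the edge case of an empty PATH. A small Python sketch of the same logic (the function prepend_path is our own, for illustration only):

```python
# Emulates the shell idiom NEW=/dir${PATH:+:${PATH}}:
# the old value is appended (with a ':' separator) only if it is non-empty.
def prepend_path(new_dir: str, old_path: str) -> str:
    return new_dir + (":" + old_path if old_path else "")

print(prepend_path("/home/usergpu/.local/bin", "/usr/bin:/bin"))
# /home/usergpu/.local/bin:/usr/bin:/bin
print(prepend_path("/home/usergpu/.local/bin", ""))
# /home/usergpu/.local/bin
```

Remember that the change takes effect in new shells; run source ~/.bashrc to apply it to the current one.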

Install NVIDIA® drivers

Please follow our instructions in this article: Install NVIDIA® drivers in Windows. The outcome will appear as follows:

Device manager

Install PyTorch

Open Ubuntu 22.04 terminal and type the following command to install PyTorch with CUDA® support:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Test in Python

Enter Python’s interactive console:

python3
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

And type these commands sequentially:

import torch
torch.cuda.device_count()
5
torch.cuda.is_available()
True

The first command imports the PyTorch framework. The second shows the number of CUDA-compatible devices in the system. The third confirms that CUDA® is available.
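The same checks can be wrapped into a small script that degrades gracefully when PyTorch or CUDA® is absent. A sketch, assuming only that PyTorch may or may not be installed (the function name cuda_report is our own):

```python
# Sketch: summarize CUDA availability as seen by PyTorch.
# Falls back gracefully when PyTorch is not installed.
def cuda_report():
    try:
        import torch
    except ImportError:
        return {"torch_installed": False, "cuda_available": False, "device_count": 0}
    return {
        "torch_installed": True,
        "cuda_available": torch.cuda.is_available(),
        "device_count": torch.cuda.device_count(),
    }

print(cuda_report())
```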

Test in Docker

Start by installing Docker Desktop; the installer can be found here. Reboot the server and launch Docker Desktop, which starts the Docker engine. Then open the Ubuntu 22.04 console from the Start menu and type the following command:

docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark -numdevices=5

where -numdevices is the number of installed GPUs. In this example, we test on a dedicated server with 5 GPUs.
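If you are unsure how many GPUs the driver exposes inside WSL, you can count them with nvidia-smi, which becomes available in WSL once the Windows driver is installed. A hedged Python sketch (the helper gpu_count is our own) that returns 0 when the tool is missing:

```python
import shutil
import subprocess

# Sketch: count GPUs reported by `nvidia-smi -L`, which prints one
# "GPU N: ..." line per device. Returns 0 when nvidia-smi is not on PATH.
def gpu_count():
    if shutil.which("nvidia-smi") is None:
        return 0
    out = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    return sum(1 for line in out.stdout.splitlines() if line.startswith("GPU "))

print(gpu_count())
```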

If you get the error “Error: only 0 Devices available, 5 requested. Exiting.”, don’t panic. This is a known NVIDIA® bug, and you can easily fix it by re-enabling each GPU in Device Manager. Right-click the Start menu and select Device Manager. Expand the Display adapters list, select each GPU, and disable it from the Action menu. Then enable each GPU in the same way. After that, the command will work correctly:

Run an example in WSL

Conclusion

This method allows you to launch nearly any application, though there may be some limitations depending on your system specifications. Even so, deployment is typically seamless, and you get all the advantages Linux offers within a Windows Server environment.


Updated: 28.03.2025

Published: 28.06.2024

