GPU-accelerated Jupyter Notebook

Jupyter Notebook is a modern development environment widely used in machine learning and data science. It lets you run Python code directly from a web interface and immediately see the results. By default, however, that code runs on the CPU. To run it on the GPU, you need to install the GPU software stack and create a separate virtual environment. In this guide, we’ll show you how to do this.
Install Anaconda
Start by installing the Anaconda Python distribution, which includes the conda tool for managing virtual environments. Download the shell script:
wget https://repo.anaconda.com/archive/Anaconda3-2023.09-0-Linux-x86_64.sh
Make this script executable:
chmod a+x Anaconda3-2023.09-0-Linux-x86_64.sh
Run the installation:
./Anaconda3-2023.09-0-Linux-x86_64.sh
During the process, the installer will ask you to accept the license agreement and confirm some installation details.
Install CUDA®
The next step is to install the latest version of the NVIDIA® CUDA® Toolkit. You can obtain additional information by visiting our step-by-step guide Install CUDA® toolkit in Linux. The easiest way to do this is to execute the following commands. Get a pin file for the CUDA® repository:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
Place this file in a standard apt configuration directory:
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
Download a local copy of the CUDA® repository as a single DEB package:
wget https://developer.download.nvidia.com/compute/cuda/12.3.2/local_installers/cuda-repo-ubuntu2204-12-3-local_12.3.2-545.23.08-1_amd64.deb
Install the downloaded package:
sudo dpkg -i cuda-repo-ubuntu2204-12-3-local_12.3.2-545.23.08-1_amd64.deb
Copy the GPG keyring so that apt can verify packages from the local CUDA® repository:
sudo cp /var/cuda-repo-ubuntu2204-12-3-local/cuda-*-keyring.gpg /usr/share/keyrings/
Update the package cache:
sudo apt-get update
Install the CUDA® toolkit using the standard apt manager:
sudo apt-get -y install cuda-toolkit-12-3
Reboot the server:
sudo shutdown -r now
Reconnect to the server over SSH with port forwarding: you need to forward port 8888 to 127.0.0.1:8888 on your local machine, for example:
ssh -L 8888:127.0.0.1:8888 [your_username]@[server_address]
For additional information, please look at this article.
After rebooting, create a separate virtual environment for GPU computing tasks:
conda create --name gpu_env python=3.8
Activate the new environment and install the additional packages:
conda activate gpu_env
conda install -c anaconda tensorflow-gpu keras-gpu
Be patient; this step can take up to 30 minutes. Now register the created environment so that it appears in Jupyter's list of available kernels:
python -m ipykernel install --user --name gpu_env --display-name "Python (GPU)"
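With the packages installed, you can verify that TensorFlow detects the GPU before moving on. The snippet below is a minimal sketch; the helper name gpu_count is our own, and it returns None when TensorFlow is not importable in the current environment:

```python
def gpu_count():
    """Return the number of GPUs TensorFlow can see, or None if it is not installed."""
    try:
        import tensorflow as tf
    except ImportError:
        return None
    # list_physical_devices('GPU') returns one entry per visible GPU
    return len(tf.config.list_physical_devices('GPU'))

print(gpu_count())
```

Inside the gpu_env environment on a properly configured server, this should print a number greater than zero.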
Install PyTorch with CUDA® support. These packages are needed to run the code examples below:
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu118
Run Jupyter Notebook
You can run this software with just one command:
(base) $ jupyter notebook
Open the web interface using the displayed link and token:
http://127.0.0.1:8888/?token=[put_your_own_token_from_console]
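The registered "Python (GPU)" kernel should appear in the notebook's New menu. You can also confirm the registration programmatically; this sketch (the list_kernels helper is our own) prints the names of all registered Jupyter kernels, and the output should include gpu_env:

```python
def list_kernels():
    """Return the names of registered Jupyter kernels, or None if jupyter_client is absent."""
    try:
        from jupyter_client.kernelspec import KernelSpecManager
    except ImportError:
        return None
    # find_kernel_specs() maps kernel names to their resource directories
    return sorted(KernelSpecManager().find_kernel_specs())

print(list_kernels())
```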

Test your installation with a small snippet that checks CUDA® availability:
import torch
torch.cuda.is_available()
If everything is OK, running it returns True. You can also list all NVIDIA® GPUs present in the system:
import subprocess

def get_gpu_info():
    try:
        # Query the names of all visible GPUs via nvidia-smi
        return subprocess.check_output(
            "nvidia-smi --query-gpu=gpu_name --format=csv,noheader",
            shell=True
        ).decode('utf-8')
    except Exception as e:
        print(f"Error: {str(e)}")
        return None

print(get_gpu_info())
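As a final check, you can run a small computation on the GPU itself. The sketch below (the matmul_demo helper is our own) multiplies two random matrices on the GPU when one is available and falls back to the CPU otherwise:

```python
import torch

def matmul_demo(n=3):
    """Multiply two random n x n matrices on the GPU if available, else on the CPU."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    a = torch.rand(n, n, device=device)
    b = torch.rand(n, n, device=device)
    return a @ b

result = matmul_demo()
print(result.device, tuple(result.shape))
```

On a server with a working CUDA® setup, the printed device should be cuda:0.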

Updated: 28.03.2025
Published: 11.07.2024