Published: 3/24/2026

# Installing Conda and PyTorch on NVIDIA DGX Spark

Tags: Conda, PyTorch, Python

When you install a GPU-enabled version of PyTorch into a Conda environment, the environment carries the NVIDIA CUDA libraries it needs, without requiring a complex, system-wide CUDA installation. This significantly simplifies getting PyTorch to run on your GPU for faster computations.

## Hardware

- GPU: GB10 Grace Blackwell Superchip
- CPU: ARM-based NVIDIA Grace (aarch64 architecture)
- OS: Ubuntu 24.04
- CUDA: 13.0.2

## Step 1 – Conda Installation

Download the Miniforge installer (see https://conda-forge.org/download/) and install it into `/usr/local/miniforge3`:

```shell
mkdir -p $HOME/codebase/conda
sudo mkdir -p /usr/local/miniforge3
sudo chown $USER:$USER /usr/local/miniforge3
cd $HOME/codebase/conda
curl -LO https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh
bash Miniforge3-$(uname)-$(uname -m).sh -b -u -p /usr/local/miniforge3
```

## Step 2 – Update your profile

```shell
vi $HOME/.profile
```

Add the Miniforge bin directory to extend your PATH:

```shell
PATH=$PATH:/usr/local/miniforge3/bin
```

Log out and log back in for the change to take effect.

## Step 3 – Activate Conda

```shell
conda init
conda create -n ai python=3.12
# conda create prints:
#   To activate this environment, use
#       conda activate ai
#   To deactivate an active environment, use
#       conda deactivate
conda activate ai
```

## Step 4 – Install PyTorch

See https://pytorch.org/get-started/locally/. Instead of using the pip installation line suggested there, I use:

```shell
python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu130
```

In case of issues, here is the uninstall command:

```shell
pip uninstall torch torchvision torchaudio -y
```

## Step 5 – Validate the installation works

One-line validation:

```shell
python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
```

Example output:

```
2.11.0+cu130
True
```

Script validation:

```shell
mkdir -p $HOME/codebase/bin
cat > $HOME/codebase/bin/cuda.ok.py <<'EOF'
import torch

print("PyTorch version:", torch.__version__)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU detected")
EOF
```

Run the script:

```shell
python $HOME/codebase/bin/cuda.ok.py
```

Example output:

```
PyTorch version: 2.11.0+cu130
GPU: NVIDIA GB10
```
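Beyond checking that CUDA is visible, a quick way to confirm the GPU actually does work is to run a small computation on it. The sketch below (a hypothetical follow-up script, not part of the original guide) multiplies two matrices on the GPU when one is available and falls back to the CPU otherwise:

```python
import torch

# Pick the GPU if PyTorch can see one; fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Allocate two random matrices directly on the chosen device
# and multiply them there.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
c = a @ b

print("device:", c.device)
print("result shape:", tuple(c.shape))
```

On the DGX Spark the first line should report a `cuda` device; seeing `cpu` instead means the cu130 wheels are installed but PyTorch is not using the GPU.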