pytorch gpu nvidia
Updated Sun, 10 Jul 2022 09:45:19 GMT

PyTorch can't see GPU (torch.cuda.is_available() returns False)

I have a problem where

import torch
print(torch.cuda.is_available())

prints False, and I can't use the GPU that is available. I've tried it in a conda environment, where I installed the PyTorch build corresponding to the NVIDIA driver I have. I've also tried it in a Docker container, where I did the same. I tried both of these options on a remote server, but they both failed. I believe I've installed the correct versions because I checked the CUDA toolkit version with nvcc --version before installing PyTorch, and I checked the GPU connection with nvidia-smi, which displays the GPUs on the machines correctly.

Also, I've checked this post and tried exporting CUDA_VISIBLE_DEVICES, but had no luck.
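For anyone trying the same, a minimal sketch of that attempt (my own example; the key point is that the variable must be set before CUDA is initialized, so before the first import of torch is safest):

```python
import os

# Hypothetical example: expose only GPU 0 to this process.
# CUDA reads this variable when it is first initialized, so it
# must be set before `import torch` / any torch.cuda call.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```

Setting it inside the script after torch has already touched CUDA has no effect, which is an easy way to conclude (wrongly) that the variable doesn't help.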

On the server I have NVIDIA V100 GPUs; the conda environment uses CUDA version 10.0 and the Docker container I built uses version 10.2. Any help or push in the right direction would be greatly appreciated. Thanks!
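In case it helps, here is the diagnostic I run (a sketch; the helper name cuda_report is mine, but the fields follow the standard torch API). It shows at a glance which CUDA build PyTorch ships with and whether it can reach a device:

```python
def cuda_report():
    """Collect basic PyTorch/CUDA visibility info into a dict."""
    try:
        import torch
    except ImportError:
        # Guard so the sketch runs even where torch isn't installed.
        return {"torch": None}
    info = {
        "torch": torch.__version__,
        "built_with_cuda": torch.version.cuda,  # CUDA toolkit torch was built against
        "cuda_available": torch.cuda.is_available(),
    }
    if info["cuda_available"]:
        info["devices"] = [torch.cuda.get_device_name(i)
                           for i in range(torch.cuda.device_count())]
    return info

print(cuda_report())
```

If built_with_cuda is reported but cuda_available is False, the wheel is fine and the problem is on the driver/runtime side.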


For anyone else having this problem: it turned out my server manager had not updated the drivers on the server.

I switched to a different server, installed Anaconda, and things started working as they should, i.e., torch.cuda.is_available() returns True after setting up a fresh environment.
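One way to catch this earlier: nvcc --version reports the CUDA *toolkit*, while nvidia-smi reports the *driver* and the highest CUDA version that driver supports, so comparing the two surfaces an outdated driver. A small sketch (the helper version_line is hypothetical; the commands are the standard NVIDIA CLI tools):

```python
import shutil
import subprocess

def version_line(cmd, keyword):
    """Run cmd and return the first output line containing keyword."""
    if shutil.which(cmd[0]) is None:
        return f"{cmd[0]}: not found"
    out = subprocess.run(cmd, capture_output=True, text=True)
    for line in (out.stdout or out.stderr).splitlines():
        if keyword in line:
            return line.strip()
    return f"{cmd[0]}: no version line found"

# nvcc's "release" line shows the toolkit version; nvidia-smi's
# header shows the driver version and its max supported CUDA.
print(version_line(["nvcc", "--version"], "release"))
print(version_line(["nvidia-smi"], "Driver Version"))
```

If the driver's supported CUDA version is lower than the toolkit version PyTorch was built against, is_available() will return False even though both tools individually look healthy.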
