Here is a step-by-step guide to checking the cuDNN (CUDA Deep Neural Network library) installation on your system and determining its version.
Step 1: Check the NVIDIA GPU and CUDA version
Ensure you have an NVIDIA GPU and the appropriate version of the CUDA Toolkit installed on your system. You can check the CUDA version by running the following command in the terminal or command prompt:

nvcc --version

Output:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Sun_Jan_28_19:07:16_Pacific_Daylight_Time_2024
Cuda compilation tools, release 11.1, V11.1.243
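If nvcc is not on your PATH, or you want to confirm that the driver can see the GPU itself, you can also run nvidia-smi, which lists the detected GPUs along with the driver version and the highest CUDA version that driver supports:

nvidia-smi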
Step 2: Locate the cuDNN files
After installing cuDNN, the library files should be in your CUDA Toolkit installation directory. Locate the cuDNN header and library files in the following directories (assuming the default installation paths); a quick way to read the installed version from the header follows the list:
On Linux:
- Header file (cudnn.h): /usr/local/cuda/include
- Library files (libcudnn.*): /usr/local/cuda/lib64
On Windows:
- Header file (cudnn.h): C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include
- Library files (cudnn.lib, cudnn64_X.dll): C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\lib\x64
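To read the installed cuDNN version directly, you can inspect the version macros in the header. Below is a minimal sketch, assuming cuDNN 8 or newer (which moved the macros from cudnn.h into cudnn_version.h) and the default Linux include path shown above; adjust the path for Windows or a custom install:

import re

# Assumption: cuDNN 8+ at the default Linux path; older releases
# define these macros in cudnn.h instead of cudnn_version.h.
header = "/usr/local/cuda/include/cudnn_version.h"

with open(header) as f:
    text = f.read()

# Pull the CUDNN_MAJOR / CUDNN_MINOR / CUDNN_PATCHLEVEL macro values
version = {
    name: re.search(rf"#define CUDNN_{name} (\d+)", text).group(1)
    for name in ("MAJOR", "MINOR", "PATCHLEVEL")
}
print("cuDNN version: {MAJOR}.{MINOR}.{PATCHLEVEL}".format(**version))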
Step 3: Check cuDNN with a deep learning framework
To ensure that cuDNN is working correctly with a deep learning framework such as TensorFlow or PyTorch, you can run a simple test script.
Here is an example using TensorFlow:
import tensorflow as tf

# Check whether a GPU is visible to TensorFlow
if tf.config.list_physical_devices('GPU'):
    print("GPU is available")
    print("cuDNN is enabled: True")
else:
    print("GPU is not available")
TensorFlow 2.x does not expose a dedicated tf.test.is_built_with_cudnn() check. However, the GPU builds of TensorFlow are compiled against cuDNN, so if TensorFlow can recognize a GPU, cuDNN is almost certainly working; you can infer cuDNN availability from GPU availability.
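For a more direct answer on recent releases (TensorFlow 2.3 and newer), tf.sysconfig.get_build_info() reports the CUDA and cuDNN versions the binary was compiled against. The cuda_version and cudnn_version keys only appear in GPU builds, so this sketch falls back gracefully on CPU-only installs:

import tensorflow as tf

# get_build_info() describes how this TensorFlow binary was built;
# GPU builds include the CUDA and cuDNN versions it was compiled against.
build_info = tf.sysconfig.get_build_info()
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("CUDA version:", build_info.get("cuda_version", "not a GPU build"))
print("cuDNN version:", build_info.get("cudnn_version", "not a GPU build"))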
Here is an example using the PyTorch library:

import torch

print(torch.backends.cudnn.version())  # Output: 90300
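Note that torch.backends.cudnn.version() returns None when the build does not include cuDNN, so it is worth pairing it with PyTorch's explicit availability checks:

import torch

# True only if this PyTorch build includes cuDNN support
print("cuDNN available:", torch.backends.cudnn.is_available())
# Whether PyTorch will actually use cuDNN (a user-togglable flag, True by default)
print("cuDNN enabled:", torch.backends.cudnn.enabled)
# cuDNN kernels are only exercised when a CUDA device is visible
print("CUDA available:", torch.cuda.is_available())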
