Sprint Chase Technologies
  • Home
  • About
    • Why Choose Us
    • Contact Us
    • Team Members
    • Testimonials
  • Services
    • Web Development
    • Web Application Development
    • Mobile Application Development
    • Web Design
    • UI/UX Design
    • Social Media Marketing
    • Projects
  • Blog
    • PyTorch
    • Python
    • JavaScript
  • IT Institute

How to Check CuDNN Version on Linux and Windows

  • Written by krunallathiya21
  • April 16, 2025
PyTorch

Here is a step-by-step guide to checking whether cuDNN (the CUDA Deep Neural Network library) is installed on your system, and which version you have.

Step 1: Check the NVIDIA GPU and CUDA version

Ensure you have an NVIDIA GPU and the appropriate version of the CUDA Toolkit installed on your system. You can check the CUDA version by running the following command in the terminal or command prompt.

nvcc --version
Output
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Sun_Jan_28_19:07:16_Pacific_Daylight_Time_2024
Cuda compilation tools, release 11.1, V11.1.243
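If you need the CUDA release in a script rather than on screen (say, to pick a matching cuDNN build), the banner above can be parsed programmatically. This is a minimal sketch, assuming nvcc is on your PATH and prints a "release X.Y" line as shown; the helper names are hypothetical:

```python
import re
import subprocess

def parse_cuda_release(banner):
    """Pull the 'release X.Y' number out of an nvcc --version banner."""
    match = re.search(r"release\s+(\d+\.\d+)", banner)
    return match.group(1) if match else None

def cuda_release_from_nvcc():
    """Run nvcc --version and return the CUDA release, or None if unavailable."""
    try:
        result = subprocess.run(
            ["nvcc", "--version"], capture_output=True, text=True, check=True
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None  # nvcc is not on PATH, or it exited with an error
    return parse_cuda_release(result.stdout)

# The sample banner above yields '11.1':
print(parse_cuda_release("Cuda compilation tools, release 11.1, V11.1.243"))
```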

Step 2: Locate the cuDNN files

After installing cuDNN, its files live in your CUDA Toolkit installation directory. Look for the cuDNN header and library files in the following locations (assuming the default installation paths):

On Linux:

  1. Header file (cudnn.h): /usr/local/cuda/include
  2. Library files (libcudnn.*): /usr/local/cuda/lib64

On Windows:

  1. Header file (cudnn.h): C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include
  2. Library files (cudnn.lib, cudnn64_X.dll): C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\lib\x64
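The version itself is recorded in those headers as the CUDNN_MAJOR, CUDNN_MINOR, and CUDNN_PATCHLEVEL macros (in cudnn.h for older releases, and in cudnn_version.h since cuDNN 8). As a minimal sketch, you could read them with a script like the one below; the helper names and the default include path are assumptions, not part of any official tooling:

```python
import re
from pathlib import Path

def parse_cudnn_version(header_text):
    """Extract (major, minor, patch) from cuDNN's version macros."""
    parts = []
    for name in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        m = re.search(rf"#define\s+{name}\s+(\d+)", header_text)
        if not m:
            return None  # header does not carry the version macros
        parts.append(int(m.group(1)))
    return tuple(parts)

def find_cudnn_version(include_dir="/usr/local/cuda/include"):
    """Try cudnn_version.h (cuDNN >= 8) first, then the legacy cudnn.h."""
    for name in ("cudnn_version.h", "cudnn.h"):
        header = Path(include_dir) / name
        if header.is_file():
            return parse_cudnn_version(header.read_text())
    return None  # cuDNN headers not found at this path

print(find_cudnn_version())  # e.g. (9, 3, 0), or None if cuDNN is not installed
```

On Windows you would pass the include directory from the paths listed above instead of the Linux default.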

Step 3: Check cuDNN with a deep learning framework

To confirm that cuDNN is working correctly with a deep learning framework such as TensorFlow or PyTorch, run a simple test script.

Here's an example using TensorFlow.

import tensorflow as tf

# If TensorFlow detects a GPU, its GPU build (which links against cuDNN) is working.
if tf.config.list_physical_devices('GPU'):
    print("GPU is available, so cuDNN is enabled")
else:
    print("GPU is not available")

TensorFlow 2.x does not expose a dedicated cuDNN check (tf.test.is_built_with_cuda() only tells you whether the build has CUDA support). However, if TensorFlow can see a GPU, cuDNN is almost certainly enabled, because GPU builds of TensorFlow link against it, so you can infer cuDNN availability from GPU availability.
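If you want the exact cuDNN version a TensorFlow build was compiled against, recent releases expose it through tf.sysconfig.get_build_info(). The sketch below wraps that call defensively; the helper name is hypothetical, and CPU-only builds (or a missing TensorFlow install) simply yield None:

```python
def tf_cudnn_version():
    """Return the cuDNN version TensorFlow was built against, or None."""
    try:
        import tensorflow as tf
    except ImportError:
        return None  # TensorFlow is not installed
    # GPU builds report keys such as 'cuda_version' and 'cudnn_version';
    # CPU-only builds omit them, so .get() falls back to None.
    return tf.sysconfig.get_build_info().get("cudnn_version")

print(tf_cudnn_version())
```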

Here is an equivalent check with PyTorch:

import torch

print(torch.backends.cudnn.is_available())  # True if cuDNN is usable
print(torch.backends.cudnn.version())       # Output: 90300
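The integer returned by torch.backends.cudnn.version() is cuDNN's own packed version number: cuDNN 9 and later pack it as major*10000 + minor*100 + patch (so 90300 is 9.3.0), while cuDNN 8.x and earlier used major*1000 + minor*100 + patch (so 8907 is 8.9.7). A small sketch to decode it; the function name is hypothetical:

```python
def decode_cudnn_version(packed):
    """Decode torch.backends.cudnn.version() into (major, minor, patch).

    cuDNN >= 9 packs the version as major*10000 + minor*100 + patch;
    cuDNN 8.x and earlier used major*1000 + minor*100 + patch.
    """
    if packed >= 90000:  # new scheme (cuDNN 9+); old-scheme values stay below 9000
        return packed // 10000, packed % 10000 // 100, packed % 100
    return packed // 1000, packed % 1000 // 100, packed % 100

print(decode_cudnn_version(90300))  # (9, 3, 0)
print(decode_cudnn_version(8907))   # (8, 9, 7)
```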
That's it!
