torch.cuda.is_available(): Checking If a CUDA-enabled GPU is Available

torch.cuda.is_available() Method in PyTorch
  • Written by krunallathiya21
  • July 31, 2025
PyTorch

The torch.cuda.is_available() method checks whether a CUDA-enabled GPU is available. It returns True if a compatible CUDA-enabled GPU is detected and False otherwise.

torch.cuda.is_available()

You can use it while toggling between CPU and GPU devices in a PyTorch workflow.

Syntax

torch.cuda.is_available()

Checking CUDA availability

I am using a T4 GPU. Let’s check its availability.
import torch

print(torch.cuda.is_available())
We got True, which means we are connected to a CUDA-enabled GPU. If we run the same code on a machine without a GPU, it prints False.
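As an aside (not from the original walkthrough): if you are on a GPU machine but want to exercise the CPU path, a common trick is to hide all GPUs with the CUDA_VISIBLE_DEVICES environment variable before importing torch. This only works when set before CUDA is initialized, so it must run at the top of the script.

```python
import os

# Hiding all GPUs before importing torch makes is_available() return False,
# which is handy for testing the CPU code path on a GPU machine.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch

print(torch.cuda.is_available())  # False
```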

Printing device name based on availability

You can use the torch.device() function to print the device name for clarity in an ML project.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

print(f"Using device: {device}")

# Output: Using device: cuda

Since I am using a CUDA-enabled GPU, the torch.cuda.is_available() method returns True, so the device resolves to cuda.

What if we are working on the CPU? Let’s find out.

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

print(f"Using device: {device}")

# Output: Using device: cpu

As we can see, I am running this on my laptop, which has no CUDA-enabled GPU, so the device resolves to cpu.
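The device object is not just for printing: the usual pattern is to move tensors and models onto it with .to(device), so the same script runs unchanged on CPU and GPU. A minimal sketch:

```python
import torch

# Pick the best available device once, then move everything onto it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, 3)  # tensors are created on the CPU by default
x = x.to(device)       # no-op on CPU; copies to the GPU if one is available

model = torch.nn.Linear(3, 1).to(device)  # model parameters follow the device too
out = model(x)

print(out.device)  # matches the chosen device
```

Keeping the device in one variable like this avoids scattering hard-coded "cuda" strings through the code, which would crash on CPU-only machines.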

Multi-GPU Check

For distributed training, we might need multiple GPUs. Let's check whether CUDA is available and, if so, count the GPUs and print each device's name.

import torch

if torch.cuda.is_available():
    print(f"CUDA is available! Number of GPUs: {torch.cuda.device_count()}")
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("CUDA is not available. Falling back to CPU.")
That’s all!
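Beyond the name, each GPU can be inspected with torch.cuda.get_device_properties(), which reports total memory and compute capability. A small sketch extending the loop above:

```python
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        # total_memory is in bytes; convert to GiB for readability
        print(f"GPU {i}: {props.name}, "
              f"{props.total_memory / 1024**3:.1f} GiB, "
              f"compute capability {props.major}.{props.minor}")
else:
    print("CUDA is not available. Falling back to CPU.")
```

This is useful when picking a device for a large model, since device 0 is not always the one with the most free memory.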