Sprint Chase Technologies

torch.var(): Variance of Tensor Elements

torch.var() in PyTorch
  • Written by krunallathiya21
  • July 11, 2025
  • 0 Comments
PyTorch

The torch.var() method calculates the variance of tensor elements along specified dimensions.

torch.var() Method in PyTorch

The question is, what is variance? Well, it is a statistical concept that measures the spread of data points from their mean, calculated as the average of squared deviations from the mean.

Imagine you have a tensor of ages. Now, you are curious about how spread out the ages in this tensor are. That’s where variance comes into the picture! It tells you how much the numbers differ from the average (the mean).

If the values are spread far from the mean, the variance is high. If they cluster narrowly around the mean, the variance is low.

It supports both sample variance (unbiased, using Bessel’s correction) and population variance (biased).

Here is the basic formula to calculate variance without any method:
variance = Σ((xᵢ - mean)²) / (N - correction)
Where N = number of elements, and correction adjusts degrees of freedom.
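This formula can be checked by hand: sum the squared deviations from the mean and divide by N − correction. The sketch below (using the default sample variance, correction=1) compares a manual computation with torch.var():

```python
import torch

ages = torch.tensor([10.0, 50.0, 90.0])

# Manual sample variance: sum of squared deviations / (N - 1)
mean = ages.mean()  # tensor(50.)
manual_var = ((ages - mean) ** 2).sum() / (ages.numel() - 1)

print(manual_var)       # tensor(1600.)
print(torch.var(ages))  # tensor(1600.)
```

Deviations of −40, 0, and 40 give squared deviations summing to 3200; dividing by N − 1 = 2 yields 1600, matching torch.var().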

Syntax

torch.var(input, dim=None, *, correction=1, keepdim=False, out=None)

Parameters

Argument Description
input (Tensor) The input tensor.
dim (int or tuple of ints, optional) The dimension(s) along which the variance is calculated. If you pass None, the variance of all elements is computed.
keepdim (bool, optional) If True, the reduced dimensions are kept with size 1.
correction (int, optional) The difference between the sample size and the sample degrees of freedom.

In earlier versions of PyTorch, this argument was named unbiased, but since PyTorch 2.0 it is called correction.

By default, it applies Bessel’s correction, i.e., correction=1.
out (Tensor, optional) An optional output tensor in which to store the result.
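As a small sketch of the keepdim argument (the 2×3 tensor here is just an assumed example), note how keepdim=True preserves the reduced dimension as size 1, which lets the result broadcast cleanly against the original tensor:

```python
import torch

t = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

# Without keepdim, the reduced dimension disappears
print(torch.var(t, dim=1).shape)                # torch.Size([2])

# With keepdim=True, it is kept as size 1
print(torch.var(t, dim=1, keepdim=True).shape)  # torch.Size([2, 1])
```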

Basic variance calculation

Let’s define two different 1D tensors, one with low spread and one with high spread, and compare their sample variances to see the difference.

import torch

# Low spread (close to average)
tensor_one = torch.tensor([49.0, 50.0, 51.0])

print("Low variance:", torch.var(tensor_one))
# Output: Low variance: tensor(1.)

# High spread (far from average)
tensor_two = torch.tensor([10.0, 50.0, 90.0])

print("High variance:", torch.var(tensor_two))
# Output: High variance: tensor(1600.)

In the case of tensor_one, the variance is 1, because every value sits very close to the tensor’s mean of 50.

In the case of tensor_two, the values are spread far from the mean, resulting in a variance of 1600: the data is more diverse, noisy, or unpredictable.

Population variance (correction=0)

To calculate the population variance, we need to pass correction=0 (or unbiased=False in older PyTorch versions).

import torch

# Low spread (close to average)
tensor_one = torch.tensor([49.0, 50.0, 51.0])

print("Low Population variance:", torch.var(tensor_one, correction=0))
# Output: Low Population variance: tensor(0.6667)

# High spread (far from average)
tensor_two = torch.tensor([10.0, 50.0, 90.0])

print("High Population variance:", torch.var(tensor_two, correction=0))
# Output: High Population variance: tensor(1066.6666)
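You can verify these numbers by applying the population formula directly, dividing by N instead of N − 1 (a minimal sanity check, not part of the original example):

```python
import torch

t = torch.tensor([49.0, 50.0, 51.0])

# Population variance: sum of squared deviations / N
manual = ((t - t.mean()) ** 2).sum() / t.numel()

print(manual)                      # tensor(0.6667)
print(torch.var(t, correction=0))  # tensor(0.6667)
```

The squared deviations are 1, 0, and 1, so the population variance is 2/3 ≈ 0.6667, matching torch.var(t, correction=0).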

Variance along a dimension

If you are working with a 2D tensor, you can find the variance along rows or columns.

  1. dim=0 → Reducing along rows (axis 0) → meaning, you are comparing values column-wise.
  2. dim=1 → Reducing along columns (axis 1) → meaning, you are comparing values row-wise.

dim=0

Let’s find out the variance of a 2D tensor column-wise:
import torch

tensor_2d = torch.tensor([[1.0, 2.0, 3.0],
                          [4.0, 5.0, 6.0]])

# Variance along columns (dim=0)
var_col = torch.var(tensor_2d, dim=0)

print(var_col)
# Output: tensor([4.5000, 4.5000, 4.5000])
Here is the explanation (sample variance, dividing by N − 1 = 1):
  1. Column 0: [1.0, 4.0] → mean = 2.5 → var = (1.5² + 1.5²) / 1 = 4.5
  2. Column 1: [2.0, 5.0] → mean = 3.5 → var = 4.5
  3. Column 2: [3.0, 6.0] → mean = 4.5 → var = 4.5

dim=1

Let’s find out the variance of a 2D tensor row-wise:
import torch

tensor_2d = torch.tensor([[1.0, 2.0, 3.0],
                          [4.0, 5.0, 6.0]])

# Variance along rows (dim=1)
var_row = torch.var(tensor_2d, dim=1)

print(var_row)
# Output: tensor([1., 1.])
Here is the explanation:
  1. Row 0: [1.0, 2.0, 3.0] → mean = 2 → var = 1.
  2. Row 1: [4.0, 5.0, 6.0] → mean = 5 → var = 1.
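Since dim also accepts a tuple of ints, you can reduce over several dimensions at once. Reducing over both dimensions of a 2D tensor is equivalent to taking the variance of all its elements (a short sketch reusing the same example tensor):

```python
import torch

tensor_2d = torch.tensor([[1.0, 2.0, 3.0],
                          [4.0, 5.0, 6.0]])

# Reducing over both dimensions equals the variance of all 6 elements
print(torch.var(tensor_2d, dim=(0, 1)))  # tensor(3.5000)
print(torch.var(tensor_2d))              # tensor(3.5000)
```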

Empty tensor


What if the input tensor is empty? In that case, the variance is tensor(nan). Why? There are no values to compute a mean from, and therefore no deviations from the mean.

import torch

empty_tensor = torch.tensor([])

var_empty = torch.var(empty_tensor)

print(var_empty)

# Output: tensor(nan)
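As a closing aside, PyTorch also provides torch.var_mean(), which returns the variance and the mean in a single call, and torch.std(), whose square equals the variance (a quick sketch):

```python
import torch

t = torch.tensor([10.0, 50.0, 90.0])

# Variance and mean in a single call
var, mean = torch.var_mean(t)
print(var, mean)  # tensor(1600.) tensor(50.)

# The standard deviation squared equals the variance
print(torch.std(t) ** 2)  # tensor(1600.)
```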
That’s all!