The fundamental difference between a PyTorch tensor and a NumPy array is that tensors are built for deep learning: they can run computations on GPUs, which significantly speeds up training and inference. In contrast, NumPy arrays are highly efficient for CPU-based numerical computation.
Converting a NumPy array to a PyTorch tensor is necessary to take advantage of GPU acceleration and PyTorch’s automatic differentiation, both of which are essential for efficient deep learning.
Here are two efficient ways to convert an array to a tensor:
- Using torch.from_numpy(): shares memory with the array
- Using torch.tensor(): copies the data
Method 1: Using torch.from_numpy()

The torch.from_numpy() method accepts a numpy array as an argument and returns a tensor. It creates a tensor that shares memory with the NumPy array, so changes to the NumPy array affect the tensor and vice versa.
If you want a separate tensor, you must create a copy explicitly. When memory is a concern, such as when dealing with large data, from_numpy() is the more efficient choice because it avoids copying.
import torch
import numpy as np

numpy_array = np.array([1, 2, 3])
print(numpy_array)        # Output: [1 2 3]
print(type(numpy_array))  # Output: <class 'numpy.ndarray'>

tensor = torch.from_numpy(numpy_array)
print(tensor)             # Output: tensor([1, 2, 3])
print(type(tensor))       # Output: <class 'torch.Tensor'>

The above code effortlessly converts an ndarray object to a tensor object.
Let’s change the element of a numpy array and see if it reflects on a tensor as well:
import torch
import numpy as np

numpy_array = np.array([1, 2, 3])
print(numpy_array)  # Output: [1 2 3]

tensor = torch.from_numpy(numpy_array)
print(tensor)       # Output: tensor([1, 2, 3])

# Changing the array's first element
numpy_array[0] = 5

# The change is reflected in the tensor as well
print(tensor)       # Output: tensor([5, 2, 3])
You can see that I changed the first element of the NumPy array from 1 to 5. Without touching the tensor, its first element was updated as well, because both objects share the same memory.
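If you want to start from from_numpy() but still end up with an independent tensor, one option is to clone the result. Here is a minimal sketch (the variable name independent_tensor is just illustrative) showing how .clone() breaks the memory link:

import torch
import numpy as np

numpy_array = np.array([1, 2, 3])

# .clone() copies the data, breaking the memory link created by from_numpy()
independent_tensor = torch.from_numpy(numpy_array).clone()

numpy_array[0] = 99
print(independent_tensor)  # Output: tensor([1, 2, 3]) -- unchanged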
Handling Data Types
By default, the from_numpy() method retains the original NumPy dtype. If you want explicit casting, you can use methods like .float() or .double() on the resulting tensor, or pass dtype= to torch.tensor(), to control the tensor's type.
import torch
import numpy as np

numpy_array = np.array([1, 2, 3])
print(numpy_array)        # Output: [1 2 3]
print(numpy_array.dtype)  # Output: int64

# Converting to float32 tensor
tensor = torch.from_numpy(numpy_array).float()
print(tensor)        # Output: tensor([1., 2., 3.])
print(tensor.dtype)  # Output: torch.float32

Let’s convert the output tensor to dtype=int16.
import torch
import numpy as np

numpy_array = np.array([1, 2, 3])
print(numpy_array)        # Output: [1 2 3]
print(numpy_array.dtype)  # Output: int64

# Converting to int16 tensor
tensor = torch.tensor(numpy_array, dtype=torch.int16)
print(tensor)        # Output: tensor([1, 2, 3], dtype=torch.int16)
print(tensor.dtype)  # Output: torch.int16
Device Placement (CPU/GPU)
NumPy arrays always live in CPU memory, and a tensor created with from_numpy() also starts out on the CPU. To benefit from GPU acceleration, you need to move the tensor to a CUDA device.
Using the .to() method, you can move a tensor from CPU to GPU and vice versa.
device = "cuda" if torch.cuda.is_available() else "CPU" tensor_gpu = torch.from_numpy(numpy_arr).to(device) # Using .to()
Enabling Gradients (requires_grad)
NumPy itself has no notion of gradients, and a tensor created from an array does not track them by default, so we need to enable gradient tracking explicitly after the conversion.
import torch
import numpy as np

numpy_array = np.array([1., 2., 3.])
print(numpy_array)  # Output: [1. 2. 3.]

# Enabling gradients
tensor = torch.from_numpy(numpy_array).requires_grad_(True)
print(tensor)
# Output: tensor([1., 2., 3.], dtype=torch.float64, requires_grad=True)
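To see the gradient tracking in action, here is a minimal sketch that runs a tiny computation on the converted tensor and calls backward(); the function y = sum(x²) is chosen purely for illustration:

import torch
import numpy as np

tensor = torch.from_numpy(np.array([1., 2., 3.])).requires_grad_(True)

# A tiny computation graph: y = sum(x^2), so dy/dx = 2x
y = (tensor ** 2).sum()
y.backward()

print(tensor.grad)  # Output: tensor([2., 4., 6.], dtype=torch.float64)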
Handling Non-Contiguous Arrays
If your input array is non-contiguous, the safest pattern is to first convert it to a contiguous array using the np.ascontiguousarray() method, and then convert it with .from_numpy().
import numpy as np
import torch

# Create a NumPy array (by taking a transposed slice)
numpy_arr = np.array([[1, 2, 3], [4, 5, 6]])

# Transpose makes it non-contiguous
numpy_transposed = numpy_arr.T
print(numpy_transposed)
# Output:
# [[1 4]
#  [2 5]
#  [3 6]]

# Convert to contiguous array
numpy_contiguous = np.ascontiguousarray(numpy_transposed)

# Convert to PyTorch tensor
tensor = torch.from_numpy(numpy_contiguous)
print(tensor)
# Output:
# tensor([[1, 4],
#         [2, 5],
#         [3, 6]])
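The case where from_numpy() actually refuses the conversion is an array with negative strides, such as a reversed slice; in that situation PyTorch typically raises an error. A hedged sketch of that case and the ascontiguousarray() fix (reversed_view is just an illustrative name):

import numpy as np
import torch

arr = np.array([1, 2, 3, 4])
reversed_view = arr[::-1]  # negative stride, non-contiguous

# torch.from_numpy(reversed_view) would typically fail here,
# because tensors with negative strides are not supported.
# Making a contiguous copy first avoids the problem:
tensor = torch.from_numpy(np.ascontiguousarray(reversed_view))
print(tensor)  # Output: tensor([4, 3, 2, 1])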
Zero-Dimensional Arrays (Scalars)
If your input is a 0D array (still an ndarray object), the conversion gives you a 0D tensor.

import numpy as np
import torch

numpy_scalar = np.array(5)
print(numpy_scalar)        # Output: 5
print(type(numpy_scalar))  # Output: <class 'numpy.ndarray'>

tensor_scalar = torch.from_numpy(numpy_scalar)
print(tensor_scalar)        # Output: tensor(5)
print(type(tensor_scalar))  # Output: <class 'torch.Tensor'>
If you only need a single Python value from a tensor, .item() is simpler and more efficient than converting back to NumPy first.
import numpy as np
import torch

numpy_scalar = np.array(5)
print(numpy_scalar)        # Output: 5
print(type(numpy_scalar))  # Output: <class 'numpy.ndarray'>

tensor_scalar = torch.from_numpy(numpy_scalar)
print(tensor_scalar.item())        # Output: 5
print(type(tensor_scalar.item()))  # Output: <class 'int'>
Method 2: Using torch.tensor()

The torch.tensor() method also converts a NumPy array to a PyTorch tensor, but it creates a copy of the data. The resulting tensor is therefore completely independent of the NumPy array, meaning they don’t share memory.
import torch
import numpy as np

numpy_array = np.array([1, 2, 3])
print(numpy_array)        # Output: [1 2 3]
print(type(numpy_array))  # Output: <class 'numpy.ndarray'>

tensor = torch.tensor(numpy_array)
print(tensor)             # Output: tensor([1, 2, 3])
print(type(tensor))       # Output: <class 'torch.Tensor'>
If you change an element of the NumPy array, the change will not be reflected in the tensor, because they don’t share memory.
import torch
import numpy as np

numpy_array = np.array([1, 2, 3])
print(numpy_array)   # Output: [1 2 3]

tensor = torch.tensor(numpy_array)
print(tensor)        # Output: tensor([1, 2, 3])

numpy_array[0] = 5
print(numpy_array)   # Output: [5 2 3]
print(tensor)        # Output: tensor([1, 2, 3])
The above output shows that the tensor remained unchanged even though we changed the array.
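Because torch.tensor() copies the data anyway, it can also set the dtype and the device in a single call. Here is a minimal sketch combining the earlier dtype and device examples:

import torch
import numpy as np

numpy_array = np.array([1, 2, 3])
device = "cuda" if torch.cuda.is_available() else "cpu"

# Copy the data, cast it, and place it on the target device in one step
tensor = torch.tensor(numpy_array, dtype=torch.float32, device=device)
print(tensor)  # Output: tensor([1., 2., 3.]) on the chosen device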
That’s all!