An Introduction to Tensors for Deep Learning

Harin Ramesh
Jan 25, 2019


Tensors are the primary data structure used in deep learning: inputs, outputs, and everything within a neural network are represented as tensors.

So what the heck is a Tensor?

The Deep Learning Book defines it as follows:

"In the general case, an array of numbers arranged on a regular grid with a variable number of axes is known as a tensor."

A tensor is a multidimensional array, i.e. an nd-array. A number is a zero-dimensional tensor, a vector is a one-dimensional tensor, and an n-dimensional array is an n-dimensional tensor. A tensor is the generalization, whereas a number, a vector, etc. are specific cases of a tensor.
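To make this concrete, here is a minimal sketch of tensors of rank 0, 1, and 2. I use NumPy purely for illustration; the same idea applies to PyTorch or TensorFlow tensors.

import numpy as np

scalar = np.array(5)                 # a number: zero-dimensional tensor
vector = np.array([1, 2, 3])         # a vector: one-dimensional tensor
matrix = np.array([[1, 2], [3, 4]])  # a matrix: two-dimensional tensor

print(scalar.ndim, vector.ndim, matrix.ndim)  # 0 1 2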

Let’s look at some terms related to tensors.

The shape of a tensor

The shape of a tensor represents the length of each dimension.

x = [
[2,1,6],
[2,8,7],
[9,9,1]
]

For the above tensor, the shape is (3, 3): it has two dimensions, each of length 3.
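If you want to verify this in code, here is a quick sketch (again assuming NumPy as an illustrative library):

import numpy as np

x = np.array([
    [2, 1, 6],
    [2, 8, 7],
    [9, 9, 1]
])
print(x.shape)  # (3, 3)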

The rank of a tensor

The rank of a tensor is the number of dimensions of that tensor.

For the above example, the rank will be 2.

A tensor of rank zero is a scalar, a zero-dimensional tensor. A tensor of rank one is a vector, a one-dimensional tensor.

Note

Rank is equal to the number of indices needed to access an element in a tensor.

x = [
[2,3],
[4,1]
]
x[1][0] => 4

For the above example, the rank is 2, so two indices are needed to access each element (indices here are zero-based, as in Python).
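As a sanity check, the rank is simply the number of entries in the shape. Here is a small sketch using NumPy's ndim attribute (an illustrative choice, not something the article depends on):

import numpy as np

x = np.array([
    [2, 3],
    [4, 1]
])
print(x.ndim)        # 2, the rank
print(len(x.shape))  # 2, rank equals the length of the shape
print(x[1][0])       # 4, two zero-based indices pick out one element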

So that’s all about the basics of tensors. I hope you enjoyed it. If you’ve made it this far and found any errors in the above, or can think of any ways to make it clearer for future readers, don’t hesitate to drop a comment. Thanks!
