Calculate The Biased Standard Deviation Of All Elements In A PyTorch Tensor

Calculate the biased standard deviation of all elements in a PyTorch Tensor by using the PyTorch std operation

Type: FREE     Duration: 4:47   Technologies: PyTorch, Python


Transcript:

First, we import the Python math module.

```python
import math
```

Then we import PyTorch.

```python
import torch
```

We print the PyTorch version we are using.

```python
print(torch.__version__)
```

We are using PyTorch 0.3.1.post2.

Now, let’s manually create the PyTorch tensor we’re going to use in this example.

```python
pt_tensor_ex = torch.FloatTensor([
    [
        [36, 36, 36],
        [36, 36, 36],
        [36, 36, 36]
    ],
    [
        [72, 72, 72],
        [72, 72, 72],
        [72, 72, 72]
    ]
])
```

We use torch.FloatTensor, we pass in the data structure, and we assign it to the Python variable pt_tensor_ex.

We construct this tensor in this way so that it will be straightforward to calculate the biased standard deviation of the tensor.

Then we print the pt_tensor_ex Python variable to see what we have.

```python
print(pt_tensor_ex)
```

We see that we have one tensor that’s sized 2x3x3 and it’s a PyTorch FloatTensor.

The first matrix is full of 36s, and the second matrix is full of 72s, just like we defined it.

Next, let’s calculate the biased standard deviation of all the elements in the PyTorch tensor by using the torch.std operation.

```python
pt_biased_std_ex = torch.std(pt_tensor_ex, unbiased=False)
```

We’re passing in our pt_tensor_ex Python variable and we’re setting the parameter unbiased to False.

That means we’re going to be calculating the biased standard deviation.

We assign the returned value to the Python variable pt_biased_std_ex.

Then we print this variable to see the value.

```python
print(pt_biased_std_ex)
```

We see that we get 18.0.

All right, now that we have our result, let’s double check it manually to make sure we’re comfortable with what was calculated.

First, let’s reacquaint ourselves with how we calculate the biased standard deviation.

First, we calculate the mean. Then we subtract the mean from each element and square the result. Finally, we sum those squared differences over all of the elements, divide the sum by N, and take the square root.

If we were using the unbiased estimator, we would divide by N - 1 instead; that adjustment is Bessel’s correction.

But because we’re calculating the biased standard deviation, or the population standard deviation, we divide by N.

When we calculate the biased standard deviation, we are treating our data as the whole population, which is why we divide by N rather than N - 1.
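
The recipe above can be sketched in plain Python, without PyTorch; `biased_std` is a hypothetical helper name used here for illustration:

```python
import math

# Biased (population) standard deviation:
# sqrt( sum((x - mean)^2) / N )  -- divide by N, not N - 1.
def biased_std(values):
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((x - mean) ** 2 for x in values) / n)

# The example tensor flattens to nine 36s and nine 72s.
values = [36.0] * 9 + [72.0] * 9
print(biased_std(values))  # 18.0
```

This matches the 18.0 that torch.std returned with unbiased=False.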

So now, let’s check the calculation manually.

We’ll start by calculating the mean of the tensor using torch.mean and passing our pt_tensor_ex variable, and assigning the result to the Python variable pt_tensor_mean_ex.

```python
pt_tensor_mean_ex = torch.mean(pt_tensor_ex)
```

We then print the pt_tensor_mean_ex Python variable:

```python
print(pt_tensor_mean_ex)
```

To see that the value is 54.0.
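
As a sanity check, that mean follows from simple arithmetic; here is a sketch in plain Python over the flattened values:

```python
# Nine 36s and nine 72s average out to (9*36 + 9*72) / 18 = 54:
values = [36.0] * 9 + [72.0] * 9
print(sum(values) / len(values))  # 54.0
```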

Now that we have the mean in hand, we want to subtract the mean from every single element in our pt_tensor_ex PyTorch tensor to demean it.

```python
pt_tensor_transformed_ex = pt_tensor_ex - pt_tensor_mean_ex
```

Then we’re going to assign it to the Python variable pt_tensor_transformed_ex.

When we print this variable:

```python
print(pt_tensor_transformed_ex)
```

We see that the first matrix is full of negative 18s, and the second matrix is full of positive 18s.

36 minus 54 is negative 18.

72 minus 54 is 18.

Okay, we’re happy with that.

That makes sense.

Then we want to square each element using PyTorch’s pow operation.

```python
pt_tensor_squared_transformed_ex = torch.pow(pt_tensor_transformed_ex, 2)
```

So we pass in our pt_tensor_transformed_ex, and we’re going to raise it to the power of 2, and we assign it to the Python variable pt_tensor_squared_transformed_ex.

We can then print this variable to see what we get.

```python
print(pt_tensor_squared_transformed_ex)
```

We know that 18 squared is 324 and negative 18 squared is 324.

So this makes sense.

Now that each number has been demeaned and squared, let’s sum all of the elements in the tensor to get one value.

```python
pt_tensor_squared_transformed_sum_ex = torch.sum(pt_tensor_squared_transformed_ex)
```

So we use torch.sum, we pass in our pt_tensor_squared_transformed_ex variable, and we’re going to assign it to the Python variable pt_tensor_squared_transformed_sum_ex.

When we print this variable:

```python
print(pt_tensor_squared_transformed_sum_ex)
```

we can see that we get the value 5832.

Because we are using a tensor that is 2x3x3, there are 2 * 3 * 3 = 18 elements.

Remembering that we’re computing the population standard deviation, also known as the biased standard deviation, we don’t need Bessel’s correction.

So we’re going to divide this number by 18 and not 18-1.

So we do 5832/18:

```python
5832 / 18
```

To get 324.0.
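
To see what Bessel’s correction would have changed, here is a quick comparison in plain Python (the 18.52 figure is approximate):

```python
import math

squared_dev_sum = 5832.0
n = 18

print(math.sqrt(squared_dev_sum / n))        # biased: 18.0
print(math.sqrt(squared_dev_sum / (n - 1)))  # unbiased: ~18.52
```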

Lastly, we use the Python math module’s sqrt function to take the square root of this expression.

```python
math.sqrt(5832 / 18)
```

When we do that, we get 18.0.

Let’s compare it to the biased standard deviation calculation result we computed earlier.

So we print pt_biased_std_ex:

```python
print(pt_biased_std_ex)
```

And we see that it’s 18.0 as well.

Perfect! We were able to calculate the biased standard deviation of all elements in a PyTorch tensor by using PyTorch’s std operation.
