tf.reduce_max: Calculate Max Of A Tensor Along An Axis Using TensorFlow
tf.reduce_max - Calculate the max of a TensorFlow tensor along a certain axis of the tensor using the TensorFlow reduce_max operation
First, we import TensorFlow as tf.
import tensorflow as tf
Then we print the TensorFlow version that we are using.
We are using TensorFlow 1.0.1.
For this example, we’ll create a TensorFlow tensor that holds random integers between 0 and 20, with data type tf.int32 and shape 2x3x4.
random_int_var = tf.get_variable("random_int_var", initializer=tf.random_uniform([2, 3, 4], minval=0, maxval=20, dtype=tf.int32))
Next, we create the TensorFlow operation that initializes all the global variables in the graph.
init_var = tf.global_variables_initializer()
Then we launch the graph in a session.
sess = tf.Session()
Next, we run the initializer operation in the session, and now our random_int_var variable has been initialized with values we can inspect.
sess.run(init_var)
So to calculate the max of this TensorFlow tensor along a certain axis of the tensor, we’re going to use the tf.reduce_max operation.
To get started, we have to figure out the rank of the tensor.
We do that by running tf.rank on our tensor inside the session.
print(sess.run(tf.rank(random_int_var)))
We see that it is a rank 3 tensor.
This means we can use the tf.reduce_max operation across three dimensions of our random_int_var tensor.
So we have three different possible axes we can get the max of.
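As an aside, the same rank check can be mirrored in plain NumPy, where the ndim attribute plays the role of tf.rank. This sketch uses the concrete values quoted later in this walkthrough, so you can follow along without a session:

```python
# NumPy analogue of the rank check; ndim plays the role of tf.rank.
# These are the concrete values quoted later in this walkthrough.
import numpy as np

x = np.array([[[17, 17,  5,  9],
               [ 1, 12,  8,  2],
               [18,  2, 17,  3]],
              [[ 3,  7,  5,  0],
               [19, 19,  7,  0],
               [ 1, 10, 11, 13]]])

print(x.ndim)   # 3, so axes 0, 1, and 2 are available to reduce over
print(x.shape)  # (2, 3, 4)
```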
Next, let’s print the random_int_var tensor variable values.
print(sess.run(random_int_var))
Looking at this tensor, we can manually figure out what the max of each axis is visually.
However, if you have a much larger tensor than this one, it’s better to do it programmatically.
So let’s use the tf.reduce_max operation to figure things out.
To keep things simple, rather than assigning each operation to a variable and then printing the variable to see the result, we’re just going to print the result of evaluating the tf.reduce_max operation directly within a session run.
We’ll start with the first axis.
To get the max along the first axis of our random_int_var tensor, we pass random_int_var to the tf.reduce_max operation and set reduction_indices, the axis we want to reduce across and get the max of, to 0.
print(sess.run(tf.reduce_max(random_int_var, reduction_indices=[0])))
When we print it, we see 17, 17, 5, 9; 19, 19, 8, 2; 18, 10, 17, 13.
With the reduction indices as 0, we compare the first rows of the two interior matrices element by element: 17 versus 3 gives 17; 17 versus 7 gives 17; 5 versus 5 gives 5; 9 versus 0 gives 9.
The second and third rows work the same way; for the bottom row: 18 versus 1 gives 18; 2 versus 10 gives 10; 17 versus 11 gives 17; 3 versus 13 gives 13.
So we reduced across the first dimension.
So when we check the shape of this reduction, using the same approach as above and passing it through the tf.shape operation:
print(sess.run(tf.shape(tf.reduce_max(random_int_var, reduction_indices=[0]))))
We see that it is 3x4.
Again, remember, it is a 2x3x4 tensor, so when we reduce across the first dimension we remove the 2 and end up with a 3x4 matrix.
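Since tf.reduce_max follows the same axis convention as NumPy’s max, the axis-0 arithmetic above can be double-checked in plain NumPy using the values quoted in this walkthrough:

```python
# Axis-0 reduction: compare the two interior 3x4 matrices element by element.
# The values are the ones quoted in the walkthrough.
import numpy as np

x = np.array([[[17, 17,  5,  9],
               [ 1, 12,  8,  2],
               [18,  2, 17,  3]],
              [[ 3,  7,  5,  0],
               [19, 19,  7,  0],
               [ 1, 10, 11, 13]]])

axis0_max = x.max(axis=0)
print(axis0_max)        # rows: 17 17 5 9 / 19 19 8 2 / 18 10 17 13
print(axis0_max.shape)  # (3, 4): the leading 2 drops out
```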
Next, let’s reprint the random_int_var TensorFlow variable values so we can see the values easily.
To get the max along the second axis of our random_int_var tensor, this time we pass in the number 1 as opposed to the number 0.
So again, we pass our tensor to tf.reduce_max, set the reduction indices to 1, run it through the session, and print the result.
print(sess.run(tf.reduce_max(random_int_var, reduction_indices=[1])))
With the reduction indices at 1, we’re comparing the elements within each column of the first matrix against each other, then the elements within each column of the second matrix against each other, and so on.
So 17, 1, 18 gives 18.
17, 12, 2 gives 17.
5, 8, 17 gives 17.
9, 2, 3 gives 9.
Then for the second matrix, 3, 19, 1 gives 19.
7, 19, 10 gives 19.
And so on.
We reduced across the second dimension, so we can check the shape, doing the same thing as before and passing the operation through tf.shape.
print(sess.run(tf.shape(tf.reduce_max(random_int_var, reduction_indices=[1]))))
We see that it is 2x4.
Again, we started out with a tensor that was 2x3x4, so by reducing across the second dimension, we get rid of the 3 and are left with a 2x4 matrix of maxes.
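The axis-1 reduction can be checked the same way in NumPy; this also fills in the columns the narration skipped over with “so on and so forth”, derived from the same quoted values:

```python
# Axis-1 reduction: take the max down each column within each 3x4 matrix.
# The values are the ones quoted in the walkthrough.
import numpy as np

x = np.array([[[17, 17,  5,  9],
               [ 1, 12,  8,  2],
               [18,  2, 17,  3]],
              [[ 3,  7,  5,  0],
               [19, 19,  7,  0],
               [ 1, 10, 11, 13]]])

axis1_max = x.max(axis=1)
print(axis1_max)        # column maxes: 18 17 17 9 / 19 19 11 13
print(axis1_max.shape)  # (2, 4): the middle 3 drops out
```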
Let’s reprint the value so we have them easily accessible.
To get the max along the third and final axis of our random_int_var tensor, we run the same thing as before, passing our tensor to tf.reduce_max, only this time the reduction_indices is 2 instead of 1.
print(sess.run(tf.reduce_max(random_int_var, reduction_indices=[2])))
You’ll notice with all of these indices that because Python uses zero-based indexing, the largest valid axis is one less than the rank of the tensor: our tensor’s rank is 3, so the last axis is 2.
This is our last axis we’re reducing against.
So with the reduction indices at 2, we’re comparing the elements within each row of the first matrix against each other, then the elements within each row of the second matrix against each other, and so on.
So we have 17, 17, 5, 9.
The max is 17 which is what we see here.
Then we have 1, 12, 8, 2.
So the max is 12 which is what we get here.
Then the third row is 18, 2, 17, 3.
So the max is 18.
Then for the second matrix, it’s the same thing: 3, 7, 5, 0.
The max is 7 which is what we get.
19, 19, 7, 0.
The max is 19 which is what we get.
1, 10, 11, 13.
The max is 13 which is what we get.
Because we reduced across the third dimension, we check the shape the same way as before, passing the reduce operation through tf.shape.
print(sess.run(tf.shape(tf.reduce_max(random_int_var, reduction_indices=[2]))))
We get 2x3.
We started out with a tensor that was 2x3x4.
So if we reduce across the third dimension, then we would expect the 4 to drop out, so we get a matrix that is 2x3.
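Once more, the axis-2 arithmetic can be verified in NumPy with the same quoted values:

```python
# Axis-2 reduction: take the max along each row of each 3x4 matrix.
# The values are the ones quoted in the walkthrough.
import numpy as np

x = np.array([[[17, 17,  5,  9],
               [ 1, 12,  8,  2],
               [18,  2, 17,  3]],
              [[ 3,  7,  5,  0],
               [19, 19,  7,  0],
               [ 1, 10, 11, 13]]])

axis2_max = x.max(axis=2)
print(axis2_max)        # row maxes: 17 12 18 / 7 19 13
print(axis2_max.shape)  # (2, 3): the trailing 4 drops out
```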
And that is how you can calculate the max of a TensorFlow tensor along a certain axis of the tensor using the tf.reduce_max operation.
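Putting the whole walkthrough together, here is a condensed, runnable sketch. It assumes the TF 1.x graph API used throughout this lesson; on TensorFlow 2.x, the tf.compat.v1 shim below restores that API.

```python
# Condensed version of the walkthrough above. Assumes the TF 1.x graph API;
# the compat.v1 shim restores it when running under TensorFlow 2.x.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# 2x3x4 variable of random integers in [0, 20)
random_int_var = tf.get_variable(
    "random_int_var",
    initializer=tf.random_uniform([2, 3, 4], minval=0, maxval=20, dtype=tf.int32))

sess = tf.Session()
sess.run(tf.global_variables_initializer())

print(sess.run(tf.rank(random_int_var)))  # rank 3, so axes 0, 1, and 2 are valid
for axis in range(3):
    reduced = sess.run(tf.reduce_max(random_int_var, reduction_indices=[axis]))
    # reducing over an axis removes that dimension from the 2x3x4 shape
    print(axis, reduced.shape)
```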