## Tensorflow: How to Pool over Depth?

I have the following parameters defined for doing a max pool over the depth of the image (RGB) to compress it before the dense and readout layers, and I am failing with an error saying that I cannot pool over depth:

```python
sunset_poolmax_1x1x3_div_2x2x3_params = {
    'pool_function': tf.nn.max_pool,
    'ksize': [1, 1, 1, 3],
    'strides': [1, 1, 1, 3],
    'padding': 'SAME'
}
```

I changed the strides to `[1, 1, 1, 3]` so that depth is the only dimension reduced by the pool, but it still doesn't work. I can't get good results with the tiny image I have to compress everything into in order to keep the colors...

Actual error:

```
ValueError: Current implementation does not support pooling in the batch and depth dimensions.
```

`tf.nn.max_pool` does not support pooling over the depth dimension, which is why you get this error.

You can use a max reduction instead to achieve what you're looking for:

```python
tf.reduce_max(input_tensor, reduction_indices=[3], keep_dims=True)
```

The `keep_dims` parameter above ensures that the rank of the tensor is preserved. This makes the behavior of the max reduction consistent with what the `tf.nn.max_pool` operation would do if it supported pooling over the depth dimension.
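To see what this reduction does, here is a minimal NumPy sketch of the same depth-wise max with the rank preserved (a small hypothetical 1×2×2×3 batch is assumed):

```python
import numpy as np

# One image of shape [batch, height, width, channels] = [1, 2, 2, 3]
x = np.array([[[[1, 5, 3], [4, 2, 6]],
               [[7, 8, 9], [12, 10, 11]]]], dtype=np.float32)

# Depth-wise max with the rank preserved, mirroring
# tf.reduce_max(x, reduction_indices=[3], keep_dims=True)
pooled = x.max(axis=3, keepdims=True)

print(pooled.shape)        # (1, 2, 2, 1)
print(pooled[0, :, :, 0])  # [[ 5.  6.] [ 9. 12.]]
```

Each spatial position keeps only the largest of its three channel values, and the result is still rank 4.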


Here is a brief example for the original question in TensorFlow. I tested it on a stock RGB image of size `225 x 225` with 3 channels.

Import the standard libraries and enable `eager_execution` to quickly view results:

```python
import tensorflow as tf
from scipy.misc import imread
import matplotlib.pyplot as plt
import numpy as np

tf.enable_eager_execution()
```

Read the image and cast it from `uint8` to `tf.float32`:

```python
x = tf.cast(imread('tiger.jpeg'), tf.float32)
x = tf.reshape(x, shape=[-1, x.shape[0], x.shape[1], x.shape[2]])
print(x.shape)
input_channels = x.shape[3]
```

Create the filters for the depthwise convolution:

```python
filters = tf.contrib.eager.Variable(tf.random_normal(shape=[3, 3, input_channels, 4]))
print(filters.shape)
```

Perform the depthwise convolution with a channel multiplier of 4. Note that the padding has been kept at `'SAME'`; it can be changed at will.

```python
x = tf.nn.depthwise_conv2d(input=x, filter=filters, strides=[1, 1, 1, 1],
                           padding='SAME', name='conv_1')
print(x.shape)
```
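For intuition, here is a minimal NumPy sketch of what `depthwise_conv2d` computes, using `'VALID'` padding for simplicity (the example above uses `'SAME'`). Each input channel is convolved with its own stack of `channel_multiplier` filters, so the output has `in_channels * channel_multiplier` channels. The function name and toy shapes here are hypothetical, purely for illustration:

```python
import numpy as np

def depthwise_conv2d_valid(x, w):
    """x: (H, W, C) image, w: (kh, kw, C, M) filters.
    Returns (H-kh+1, W-kw+1, C*M): each input channel c is
    convolved with its own M filters w[:, :, c, :]."""
    H, W, C = x.shape
    kh, kw, _, M = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1, C * M))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            patch = x[i:i + kh, j:j + kw, :]  # (kh, kw, C) window
            for c in range(C):
                for m in range(M):
                    out[i, j, c * M + m] = np.sum(patch[:, :, c] * w[:, :, c, m])
    return out

x = np.random.rand(5, 5, 3)     # toy RGB patch
w = np.random.rand(3, 3, 3, 4)  # channel_multiplier = 4
y = depthwise_conv2d_valid(x, w)
print(y.shape)  # (3, 3, 12): 3 input channels * multiplier 4 = 12 output channels
```

This is why the plotting code further down shows a 4×3 grid: the 3 input channels times the multiplier of 4 give 12 output channels.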

Perform the `max_pooling2d`. Since the output size of a pooling layer is `(input_size - pool_size + 2 * padding)/stride + 1` and the padding is `'valid'`, we should get an output of `(225 - 2 + 0)/1 + 1 = 224`.
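The size formula can be checked with a tiny helper (`pool_output_size` is a hypothetical function written here for illustration, not part of TensorFlow):

```python
def pool_output_size(input_size, pool_size, padding=0, stride=1):
    # (input_size - pool_size + 2 * padding) // stride + 1
    return (input_size - pool_size + 2 * padding) // stride + 1

print(pool_output_size(225, 2))  # 224 for 'valid' padding (padding=0) and stride 1
```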

```python
x = tf.layers.max_pooling2d(inputs=x, pool_size=2, strides=1,
                            padding='valid', name='maxpool1')
print(x.shape)
```

Plot the figures to confirm.

```python
fig, ax = plt.subplots(nrows=4, ncols=3)
q = 0
for ii in range(4):
    for jj in range(3):
        ax[ii, jj].imshow(np.squeeze(x[:, :, :, q]))
        ax[ii, jj].set_axis_off()
        q += 1
plt.tight_layout()
plt.show()
```


TensorFlow now supports depth-wise max pooling with `tf.nn.max_pool()`. For example, here is how to implement it using a pooling kernel size of 3, stride 3 and `VALID` padding:

```python
import tensorflow as tf

output = tf.nn.max_pool(images, ksize=(1, 1, 1, 3), strides=(1, 1, 1, 3), padding="VALID")
```
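What this op computes can be sketched in NumPy: with kernel and stride both 3 along the channel axis and `VALID` padding, the channels are split into non-overlapping groups of 3 and the max is taken within each group (a hypothetical 12-channel tensor is used here as input):

```python
import numpy as np

x = np.arange(12, dtype=np.float32).reshape(1, 1, 1, 12)  # [batch, H, W, 12 channels]

# Non-overlapping depth pooling with kernel = stride = 3:
# group the channels in threes, then take the max within each group.
b, h, w, c = x.shape
pooled = x.reshape(b, h, w, c // 3, 3).max(axis=4)

print(pooled.shape)     # (1, 1, 1, 4)
print(pooled[0, 0, 0])  # [ 2.  5.  8. 11.]
```

Note this reshape trick only matches `tf.nn.max_pool` when the depth stride equals the depth kernel size, which is also what the TensorFlow op requires for depth pooling.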

You can use this in a Keras model by wrapping it in a `Lambda` layer:

```python
from tensorflow import keras

depth_pool = keras.layers.Lambda(
    lambda X: tf.nn.max_pool(X, ksize=(1, 1, 1, 3), strides=(1, 1, 1, 3), padding="VALID"))

model = keras.models.Sequential([
    ...,  # other layers
    depth_pool,
    ...   # other layers
])
```

Alternatively, you can write a custom Keras layer:

```python
class DepthMaxPool(keras.layers.Layer):
    def __init__(self, pool_size, strides=None, padding="VALID", **kwargs):
        super().__init__(**kwargs)
        if strides is None:
            strides = pool_size
        self.pool_size = pool_size
        self.strides = strides
        self.padding = padding

    def call(self, inputs):
        return tf.nn.max_pool(inputs,
                              ksize=(1, 1, 1, self.pool_size),
                              strides=(1, 1, 1, self.strides),
                              padding=self.padding)
```

You can then use it like any other layer:

```python
model = keras.models.Sequential([
    ...,  # other layers
    DepthMaxPool(3),
    ...   # other layers
])
```


You'd have to look at the cuDNN pooling API to find out whether it is possible to pool across the spatial dimensions and the depth dimension at the same time. Eigen's implementation of 3D pooling (CPU) doesn't do simultaneous pooling across more than the spatial dimensions, and it would probably be a lot of work to enable this.

