Backpropagation through nested tf.map_fn

Date: 2019-11-28 18:58:18

Tags: tensorflow nested gradient backpropagation map-function

I would like to map a TensorFlow function over each vector corresponding to the depth channels of every pixel in a tensor of shape [batch_size, H, W, n_channels].

In other words, for every image of size H x W in my batch:

  1. I extract a number of feature maps F_k (n_channels of them), each of size H x W (so that, all together, the feature maps form a tensor of shape [H, W, n_channels]);
  2. then, I want to apply a custom function to the vector v_ij associated with the i-th row and j-th column of each feature map F_k, one that spans the depth channels as a whole (i.e., v has dimension [1 x 1 x n_channels]). Ideally, all of this would happen in parallel; see the pseudocode sketch right after this list.
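
Conceptually, the operation I am after looks like this (plain-Python pseudocode only; f stands in for the custom per-pixel function):

# pseudocode, not TensorFlow: apply f independently to every pixel's depth vector
for b in range(batch_size):
    for i in range(H):
        for j in range(W):
            output[b, i, j, :] = f(input[b, i, j, :])  # f: [n_channels] -> [n_channels]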

A picture explaining the process is shown below. The only difference from the picture is that both the input and output "receptive fields" have size 1x1 (the function is applied to each pixel independently).

[figure: illustration of mapping a function over the depth channels of each pixel]

This is similar to applying a 1x1 convolution to the tensor; however, I need to apply a more general function over the depth channels, rather than a simple sum operation.
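
For reference, the 1x1-convolution special case can be written in one line (a minimal sketch; x and n_channels as in the snippets below):

# a 1x1 convolution computes, for each pixel, a learned weighted sum
# over the depth channels: the special case I want to generalize
y = tf.layers.conv2d(x, filters=n_channels, kernel_size=1, padding='same')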

I think tf.map_fn() could be an option, and I tried the following solution, in which I use tf.map_fn() recursively to access the features associated with each pixel. However, this seems suboptimal and, most importantly, it raises an error when trying to backpropagate the gradients.

Do you have any idea why this happens, and how I could structure the code to avoid the error?

This is my current implementation of the function:

import tensorflow as tf
from tensorflow import layers


def apply_function_on_pixel_features(incoming):
    # at first the input is [None, W, H, n_channels]
    if len(incoming.get_shape()) > 1:
        return tf.map_fn(lambda x: apply_function_on_pixel_features(x), incoming)
    else:
        # here the input is [n_channels]
        # apply some function that performs a transformation and returns a vector of the same size
        output = my_custom_fun(incoming)  # my_custom_fun() doesn't change the shape
        return output
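
(my_custom_fun() is just a placeholder; any differentiable transformation mapping a vector of shape [n_channels] to a vector of the same shape would do. A purely illustrative stand-in, not my actual function, could be:)

def my_custom_fun(v):
    # illustrative stand-in only: maps [n_channels] -> [n_channels]
    return tf.tanh(v) * tf.reduce_sum(v)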

And here is the body of my code:

H = 128
W = 132
n_channels = 8

x1 = tf.placeholder(tf.float32, [None, H, W, 1])
x2 = layers.conv2d(x1, filters=n_channels, kernel_size=3, padding='same')

# now apply a function to the features vector associated to each pixel
x3 = apply_function_on_pixel_features(x2)  
x4 = tf.nn.softmax(x3)

loss = cross_entropy(x4, labels)
optimizer = tf.train.AdamOptimizer(lr)
train_op = optimizer.minimize(loss)  # <--- ERROR HERE!

In particular, I get the following error:

File "/home/venvs/tensorflowGPU/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2481, in AddOp
    self._AddOpInternal(op)

File "/home/venvs/tensorflowGPU/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2509, in _AddOpInternal
    self._MaybeAddControlDependency(op)
File "/home/venvs/tensorflowGPU/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2547, in _MaybeAddControlDependency
    op._add_control_input(self.GetControlPivot().op)

AttributeError: 'NoneType' object has no attribute 'op'

The whole error stack and the code can be found here. Thanks for your help,

G。


UPDATE

Following @thushv89's suggestion, I added a possible solution to the problem below. I still don't know why the previous code didn't work; any insight on this would still be appreciated.

2 Answers:

Answer 0 (score: 0):

Following @thushv89's suggestion, I reshaped the array, applied the function, and then reshaped it back (to avoid the tf.map_fn recursion). I still don't know exactly why the previous code didn't work, but the current implementation allows the gradients to be propagated back to the previous layers. I'll leave it below, in case anyone finds it useful:

def apply_function_on_pixel_features(incoming, batch_size):

    # get input shape:
    _, W, H, C = incoming.get_shape().as_list()
    incoming_flat = tf.reshape(incoming, shape=[batch_size * W * H, C])

    # apply function on every vector of shape [1, C]
    out_matrix = my_custom_fun(incoming_flat)  # dimension remains unchanged

    # go back to the input shape [None, W, H, C]
    out_shape = tf.convert_to_tensor([batch_size, W, H, C])
    out_matrix = tf.reshape(out_matrix, shape=out_shape)

    return out_matrix

Note that I now need to pass in the batch size to reshape the tensor correctly, because TensorFlow complains if I give None or -1 as a dimension.
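
(A possible workaround, sketched below but untested, would be to read the batch size dynamically with tf.shape() instead of passing it in:)

# untested sketch: derive the batch size from the tensor at run time
batch_size = tf.shape(incoming)[0]            # scalar tensor, known only at run time
out_shape = tf.stack([batch_size, W, H, C])   # dynamic target shape
out_matrix = tf.reshape(out_matrix, shape=out_shape)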

Any comments and insights on the code above would still be appreciated.

Answer 1 (score: 0):

@gabriele regarding having to depend on batch_size, have you tried doing it the following way? This function does not depend on batch_size, and you can replace the map_fn with anything you like.

def apply_function_on_pixel_features(incoming):

    # get input shape:
    _, W, H, C = incoming.get_shape().as_list()
    incoming_flat = tf.reshape(incoming, shape=[-1, C])

    # apply function on every vector of shape [1, C]
    out_matrix = tf.map_fn(lambda x: x+1, incoming_flat)  # dimension remains unchanged

    # go back to the input shape [None, W, H, C]
    out_matrix = tf.reshape(out_matrix, shape=[-1, W, H, C])

    return out_matrix

The full code I tested is below:

import numpy as np
import tensorflow as tf
from tensorflow.keras.losses import categorical_crossentropy

def apply_function_on_pixel_features(incoming):

    # get input shape:
    _, W, H, C = incoming.get_shape().as_list()
    incoming_flat = tf.reshape(incoming, shape=[-1, C])

    # apply function on every vector of shape [1, C]
    out_matrix = tf.map_fn(lambda x: x+1, incoming_flat)  # dimension remains unchanged

    # go back to the input shape [None, W, H, C]
    out_matrix = tf.reshape(out_matrix, shape=[-1, W, H, C])

    return out_matrix

H = 32
W = 32
x1 = tf.placeholder(tf.float32, [None, H, W, 1])
labels = tf.placeholder(tf.float32, [None, 10])
x2 = tf.layers.conv2d(x1, filters=1, kernel_size=3, padding='same')

# now apply a function to the features vector associated to each pixel
x3 = apply_function_on_pixel_features(x2)  
x4 = tf.layers.flatten(x3)
x4 = tf.layers.dense(x4, units=10, activation=tf.nn.softmax)

loss = categorical_crossentropy(labels, x4)
optimizer = tf.train.AdamOptimizer(0.001)
train_op = optimizer.minimize(loss)


x = np.zeros(shape=(10, H, W, 1))
y = np.random.choice([0,1], size=(10, 10))


with tf.Session() as sess:
  tf.global_variables_initializer().run()
  sess.run(train_op, feed_dict={x1: x, labels:y})