How do I convert a CNN model's input tensor from shape (?, 128, 128, 3) to (?, ?, ?, 3)?

Asked: 2019-06-21 23:49:00

Tags: tensorflow keras deep-learning

I am trying to visualize the filters of a CNN model using Keras. Here is the link to the code I am following: https://keras.io/examples/conv_filter_visualization/. Note: I am a beginner and still learning about CNNs.

That code works fine for the VGG-16 model, whose input has shape (?, ?, ?, 3). I want to make it work for a CNN model whose input has a fixed width and height, e.g. (?, 128, 128, 3). I tried changing the model input from (?, 128, 128, 3) to (?, ?, ?, 3), but it ends up raising an error.

Background: I want to reshape the input to (?, ?, ?, 3) so that I can do progressive upscaling and tensor resizing to improve the visualized image.
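For reference, a minimal sketch of an alternative that avoids reshaping altogether: if the model is fully convolutional (no Flatten/Dense layers tied to 128x128), it can be rebuilt on an Input whose spatial dimensions are None. The names model, flexible_input, and flexible_model below are illustrative assumptions, not part of the original code:

    # Sketch only: assumes `model` is an already-built, fully convolutional
    # Keras model. Calling it on a size-agnostic Input rebuilds the graph
    # so that the new model's input has shape (?, ?, ?, 3).
    from keras.layers import Input
    from keras.models import Model

    flexible_input = Input(shape=(None, None, 3))  # batch dim is implicit
    flexible_model = Model(flexible_input, model(flexible_input))
    print(flexible_model.inputs[0].shape)          # -> (?, ?, ?, 3)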

Here is my notebook code:

# these are the parameters from other part of the code:
# input_img = model.inputs[0]
# layer_output = layer_dict[layer_name].output
# filter_index = 13 (can be any index within bounds)
# layer_name = 'conv2d_8'
# step=1.
# epochs=10
# upscaling_steps=9
# upscaling_factor=1.2
# output_dim=(180, 180)
# filter_range=(0, 2)
def _generate_filter_image(input_img,
                               layer_output,
                               filter_index):
        """Generates image for one particular filter.

        # Arguments
            input_img: The input-image Tensor.
            layer_output: The output-image Tensor.
            filter_index: The index of the filter to process.
                          Assumed to be valid.

        # Returns
            None if no image could be generated,
            or a tuple of the image (array) itself and the last loss.
        """
        s_time = time.time()
        # NOTE: tf.reshape allows at most one inferred (-1) dimension, so
        # [-1, -1, -1, 3] is not a valid target shape; reassigning input_img
        # to a new tensor also disconnects it from the model graph, so
        # K.gradients(loss, [input_img]) below yields None.
        input_img = tf.reshape(input_img, [-1, -1, -1, 3])
        print("input image shape after reshape", input_img.shape)

        # we build a loss function that maximizes the activation
        # of the nth filter of the layer considered

        if K.image_data_format() == 'channels_first':
            loss = K.mean(layer_output[:, filter_index, :, :])
        else:
            loss = K.mean(layer_output[:, :, :, filter_index])

        # we compute the gradient of the input picture wrt this loss
        grads = K.gradients(loss, [input_img])[0]

        # normalization trick: we normalize the gradient
        grads = normalize(grads)
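        # normalize() is the L2-norm utility from the linked example, roughly:
        #   def normalize(x):
        #       return x / (K.sqrt(K.mean(K.square(x))) + 1e-5)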

        # this function returns the loss and grads given the input picture
        iterate = K.function([input_img], [loss, grads])

        # we start from a gray image with some random noise
        intermediate_dim = tuple(
            int(x / (upscaling_factor ** upscaling_steps)) for x in output_dim)
        if K.image_data_format() == 'channels_first':
            input_img_data = np.random.random(
                (1, 3, intermediate_dim[0], intermediate_dim[1]))
        else:
            input_img_data = np.random.random(
                (1, intermediate_dim[0], intermediate_dim[1], 3))

        # NOTE: this overwrites the intermediate_dim-based initialization
        # above with a fixed 128x128 start image, bypassing the progressive
        # upscaling schedule derived from upscaling_steps.
        input_img_data = np.uint8(np.random.uniform(150, 180, (1, 128, 128, 3))) / 255

        # Slowly upscaling towards the original size prevents a dominating
        # high-frequency pattern in the visualized structure, as would occur
        # if we computed the full-size image directly. Each intermediate
        # size is a better starting point for the next one and therefore
        # helps avoid poor local minima.
        for up in reversed(range(upscaling_steps)):
            # we run gradient ascent for e.g. 20 steps
            t1 = time.time()
            for _ in range(epochs):

                loss_value, grads_value = iterate([input_img_data])
                input_img_data += grads_value * step


            # Calculate upscaled dimension
            intermediate_dim = tuple(
                int(x / (upscaling_factor ** up)) for x in output_dim)
            # Upscale
            img = deprocess_image(input_img_data[0])
            img = np.array(pil_image.fromarray(img).resize(intermediate_dim, pil_image.BICUBIC))
            input_img_data = [process_image(img, input_img_data[0])]

        t2 = time.time()
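For reference, a hypothetical call site matching the commented parameter list at the top of the snippet (the body above is truncated; per its docstring the function returns None or an (image, loss) tuple):

    # Hypothetical invocation; the names are the ones listed in the comments above.
    result = _generate_filter_image(input_img, layer_output, filter_index)
    if result is not None:
        img, last_loss = result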

I get this error:

ValueError: Tried to convert 'x' to a tensor and failed. Error: None values not supported.

1 Answer:

Answer 0 (score: 0)

This problem appears when normalizing gradients taken with respect to values of a different shape: K.gradients returns None when the loss is not connected to the tensor you differentiate against, and normalize() then fails on that None with the "None values not supported" error above.

The problem is in:

grads = normalize(K.gradients(loss, conv_output)[0])

Change it to:

grads = normalize(_compute_gradients(loss, [conv_output])[0])
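Note that _compute_gradients is not a Keras built-in; the definition usually quoted alongside this fix (an assumption here, since the answer does not include it) replaces the None gradients that tf.gradients returns for disconnected variables with zeros:

    import tensorflow as tf

    def _compute_gradients(tensor, var_list):
        # tf.gradients returns None for variables the tensor does not depend
        # on; substitute zero tensors of matching shape so that downstream
        # zipping and normalizing do not fail.
        grads = tf.gradients(tensor, var_list)
        return [grad if grad is not None else tf.zeros_like(var)
                for var, grad in zip(var_list, grads)]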

If that works, you are all set. Otherwise, if you get the error zip argument #1 must support iteration (typically because a bare tensor rather than a list was passed as the variable list), fall back to:

    grads = normalize(K.gradients(loss, conv_output)[0])
    # grads = normalize(_compute_gradients(loss, conv_output)[0])    
    gradient_function = K.function([model.inputs[0]], [conv_output, grads]) 
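A hypothetical use of gradient_function, following the snippet above:

    conv_value, grads_value = gradient_function([input_img_data])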

See this issue for more information!