Keras: dynamically crop the input based on another model's output

Date: 2018-08-28 11:05:48

Tags: python tensorflow keras

I am trying to build a cropping module in Keras using a Lambda layer, but whenever I try to compile the model I get the following error message:

"AttributeError: 'NoneType' object has no attribute '_inbound_nodes'"
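In standalone Keras 2.x with the TensorFlow backend, this error usually indicates that one of the tensors fed into a Keras layer was not produced by a Keras layer (for example a tf.constant, a K.constant, or the output of a raw backend op), so it carries no Keras layer history. A minimal sketch, unrelated to the model described below, that triggers the same error:

from keras.layers import Input, Lambda
from keras.models import Model
import keras.backend as K

x = Input(shape=(4,))
c = K.constant([1.0, 2.0, 3.0, 4.0])        # plain backend tensor, no Keras history
y = Lambda(lambda t: t[0] + t[1])([x, c])   # the constant enters the layer's call list
m = Model(inputs=x, outputs=y)              # raises the same _inbound_nodes AttributeError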

Here is what I am trying to accomplish:

The output of model 1 is a (None, 6, 34, 60) tensor, a coarse approximation of heat maps of the object locations.

from keras.models import Model, load_model
from keras.layers import Input, Lambda
import keras.backend as K
import tensorflow as tf
coarse_model = load_model('coarse_model.h5')
input_1 = Input(shape=(270,480,3))
input_2 = Input(shape=(135,240,3))
output_hmap = coarse_model([input_1,input_2])
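Since the .h5 file is not available here, the following is a hypothetical stand-in for coarse_model that only reproduces the interface described in this post (two image inputs, a (None, 6, 34, 60) heatmap output, and intermediate layers named "input1" to "input4" as used below); the architecture, filter counts and activations are invented purely for illustration:

from keras.layers import Conv2D, Concatenate, Permute  # Input and Model imported above

in_full = Input(shape=(270, 480, 3))
in_half = Input(shape=(135, 240, 3))

f1 = Conv2D(128, 3, padding='same', activation='relu', name='input1')(in_full)        # (None, 270, 480, 128)
f2 = Conv2D(128, 3, strides=2, padding='same', activation='relu', name='input2')(f1)  # (None, 135, 240, 128)
f3 = Conv2D(128, 3, strides=2, padding='same', activation='relu', name='input3')(f2)  # (None, 68, 120, 128)
f4 = Conv2D(128, 3, strides=2, padding='same', activation='relu', name='input4')(f3)  # (None, 34, 60, 128)

g = Conv2D(64, 3, strides=4, padding='same', activation='relu')(in_half)              # (None, 34, 60, 64)
h = Conv2D(6, 1, activation='sigmoid')(Concatenate()([f4, g]))                         # (None, 34, 60, 6)
coarse_model = Model([in_full, in_half], Permute((3, 1, 2))(h))                        # (None, 6, 34, 60)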

Here I pull some intermediate layers out of the original model and use them as the inputs to be cropped:

intermediate_layer_model1 = coarse_model.get_layer("input1").output
intermediate_layer_model2 = coarse_model.get_layer("input2").output
intermediate_layer_model3 = coarse_model.get_layer("input3").output
intermediate_layer_model4 = coarse_model.get_layer("input4").output
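As an aside, another common way to expose these intermediate activations is a small helper model built on coarse_model's own inputs (a sketch only; feature_extractor is an illustrative name and is not used anywhere else in this post):

feature_extractor = Model(inputs=coarse_model.inputs,
                          outputs=[coarse_model.get_layer(name).output
                                   for name in ('input1', 'input2', 'input3', 'input4')])
# feature_extractor.predict(...) then returns the four feature maps as numpy arrays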

The first of these layers has shape (None, 270, 480, 128), and the shapes get progressively smaller as you go deeper into the network. These are the layers I want to crop, cutting a 36x36 patch around the location given by the previous model's output. The cropping module is complicated; it is a Lambda layer in Keras that I tried to write in pure TensorFlow, and the desired output is the cropped patch for every example in the None batch dimension, for each of the 6 channels, from each intermediate_layer.

layer1 = Lambda(crop_module,output_shape=(36,36,128))(
    [output_hmap,K.constant(36),tf.constant(0,dtype=tf.int32),
     K.constant([270,480],dtype=tf.float32),intermediate_layer_model1])

def crop_module(data):

    output = data[0]
    dimension = data[1]
    idx = data[2]
    shaped = data[3]

    def hmap_to_coord(hmap):
        # flatten the Tensor along the height and width axes

        flat_tensor = tf.cast(flat()(hmap),dtype='float32')
        # argmax over the flattened (H*W) axis, split back into row / column indices
        index_y = tf.cast(tf.divide(tf.cast(tf.argmax(flat_tensor,axis=1),tf.float32),(tf.cast(tf.shape(hmap)[2],tf.float32))),tf.int32)
        index_x = tf.argmax(flat_tensor,axis=1,output_type=tf.int32) - index_y*tf.shape(hmap)[2]

        index_y = tf.cast(tf.divide(index_y+1,tf.shape(hmap)[1]),tf.float32)
        index_x = tf.cast(tf.divide(index_x+1,tf.shape(hmap)[2]),tf.float32)

        # stack and return the 2D coordinates
        return tf.stack([index_x,index_y])

    def find_coordinates(coordinates,dim,shape):
        shape = tf.cast(shape,tf.float32)
        dim = tf.constant(36)
        half_dim = tf.cast(tf.divide(dim,2),dtype=tf.int32)

        i = tf.constant(0)
        steps = tf.shape(coordinates)[0]
        initial_outputs = tf.TensorArray(dtype=tf.float32, size=steps)

        def while_condition(i,*args):
            return tf.less(i, steps)

        def body(i,shape,outputs_):

            # do something here which you want to do in your loop
            # increment i

            coordinates_x = tf.cast(tf.multiply(shape[1],coordinates[i][0]),tf.int32)
            coordinates_y = tf.cast(tf.multiply(coordinates[i][1],shape[0]),tf.int32)


            shape = tf.cast(shape,tf.int32)
            x = tf.cond(tf.less(coordinates_x,dim+1),lambda: tf.stack([0,shape[1] - dim ]), 
                    lambda: tf.cond(tf.less(coordinates_x, shape[1] - dim+1), lambda: tf.stack((shape[1] - dim,0)),
                                    lambda: tf.cond(tf.equal(dim,9),lambda:tf.stack((coordinates_x - 5,shape[1] - coordinates_x - 4)),
                                                    lambda:tf.stack((coordinates_x - half_dim,shape[1] - coordinates_x - half_dim)))))

            y = tf.cond(tf.less(coordinates_y,dim+1),lambda: tf.stack([shape[0] - dim, 0 ]), 
                    lambda: tf.cond(tf.less(coordinates_y, shape[1] - dim+1), lambda: tf.stack((0,shape[0] - dim)),
                                    lambda: tf.cond(tf.equal(dim,9),lambda:tf.stack((shape[0] - coordinates_y - 4,coordinates_y-5)),
                                                    lambda:tf.stack((shape[0] - coordinates_y - half_dim,coordinates_y-half_dim)))))
            outs = tf.stack([x,y])
            outputs_ = outputs_.write(i,outs)

            return tf.add(i, 1),shape,outputs_

        # do the loop:
        r,shape,output = tf.while_loop(while_condition, body, [i,shape,initial_outputs])



        return output.stack()

    ### Get the coordinates for each point from the hmap output of model1
    output = tf.cast(output,tf.int32)
    idx = tf.cast(idx,tf.int32)
    coordinat = hmap_to_coord(output[:,idx])

    coord = tf.cast(find_coordinates(coordinat,dimension,shaped),tf.int32)

    ## do this for each of the points in the output
    i = tf.constant(0)
    steps = tf.shape(coordinat)[0]
    initial_outputs = tf.TensorArray(dtype=tf.float32, size=steps)

    def while_condition(i,*args):
        return tf.less(i, steps)

    def body(i,data,coord1,shaped,outputs_):
        # do something here which you want to do in your loop
        # increment i
        shaped = tf.cast(shaped,tf.int32)
        crop1 = data[4][i][coord1[i][0][0]:shaped[0]-coord1[i][0][1],coord1[i][1][0]:shaped[1]-coord1[i][1][1],:]

        outputs_ = outputs_.write(i,crop1)

        return tf.add(i, 1),data,coord1,shaped,outputs_

    # do the loop:
    r,dat,cord1,shape,outputs = tf.while_loop(while_condition, body, [i,data,coord,shaped,initial_outputs])

    return outputs.stack()
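For reference, this is a simplified sketch of the core step crop_module is meant to perform, for a single example and a single heat map: take the argmax of the heat map, rescale it into the feature map's coordinates, and cut a fixed 36x36 window. Border handling is cruder than above (the offset is simply clipped so the window stays inside the feature map), and the function name and arguments are illustrative only:

import tensorflow as tf

def crop_around_peak(hmap, fmap, patch=36):
    # hmap: a single 2-D heat map, e.g. shape (34, 60)
    # fmap: a single feature map, e.g. shape (270, 480, 128)
    hm_h, hm_w = tf.shape(hmap)[0], tf.shape(hmap)[1]
    fm_h, fm_w = tf.shape(fmap)[0], tf.shape(fmap)[1]

    # argmax over the flattened heat map, split into row / column indices
    flat_idx = tf.cast(tf.argmax(tf.reshape(hmap, [-1])), tf.int32)
    peak_row = flat_idx // hm_w
    peak_col = flat_idx % hm_w

    # rescale the peak into feature-map coordinates and centre the window on it
    row = peak_row * fm_h // hm_h - patch // 2
    col = peak_col * fm_w // hm_w - patch // 2
    row = tf.clip_by_value(row, 0, fm_h - patch)
    col = tf.clip_by_value(col, 0, fm_w - patch)

    return tf.image.crop_to_bounding_box(fmap, row, col, patch, patch)

The full crop_module above additionally loops over the batch and over the 6 heat maps with tf.while_loop and a TensorArray.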

If I try to compile the model, even with just this first layer, I get the _inbound_nodes error.

final_model = Model(inputs=[input_1,input_2],outputs=layer1)

Is what I am trying to do even possible in Keras? Can the dimensions of the crop for each batch be determined by a tensor from another model?
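For reference, constant parameters that do not depend on the batch are normally handed to a Lambda through its arguments keyword instead of being placed in the call list, so that only real Keras tensors reach the layer. A minimal, self-contained sketch of that mechanism with a toy crop function (it does not address wiring coarse_model's internal tensors into the new graph, which is what the question above is really about):

from keras.layers import Input, Lambda
from keras.models import Model

def fixed_crop(t, size=36):
    # toy stand-in for crop_module: cut a size x size corner patch
    return t[:, :size, :size, :]

inp = Input(shape=(270, 480, 128))
out = Lambda(fixed_crop, output_shape=(36, 36, 128),
             arguments={'size': 36})(inp)   # constants travel as keyword arguments,
model = Model(inp, out)                     # not as extra layer inputs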

0 Answers:

There are no answers yet.