What I want to do is add a constant tensor to the output of the network:
import numpy as np
import tensorflow as tf
from keras.applications import VGG16
from keras.layers import Input, TimeDistributed, Flatten, LSTM, Dense
from keras.models import Model
from keras.optimizers import Adadelta

inputs = Input(shape=(config.N_FRAMES_IN_SEQUENCE, config.IMAGE_H, config.IMAGE_W, config.N_CHANNELS))
cnn = VGG16(include_top=False, weights='imagenet', input_shape=(config.IMAGE_H, config.IMAGE_W, config.N_CHANNELS))
x = TimeDistributed(cnn)(inputs)
x = TimeDistributed(Flatten())(x)
x = LSTM(256)(x)
x = Dense(config.N_LANDMARKS * 2, activation='linear')(x)
mean_landmarks = np.array(config.MEAN_LANDMARKS, np.float32)
mean_landmarks = mean_landmarks.flatten()
mean_landmarks_tf = tf.convert_to_tensor(mean_landmarks)
x = x + mean_landmarks_tf
model = Model(inputs=inputs, outputs=x)
optimizer = Adadelta()
model.compile(optimizer=optimizer, loss='mae')
But I get this error:
ValueError: Output tensors to a Model must be the output of a Keras `Layer` (thus holding past layer metadata). Found: Tensor("add:0", shape=(?, 136), dtype=float32)
This is trivial in raw TensorFlow, but how can it be done in Keras?
Answer 0 (score: 1)
It seems this can be done with a Lambda layer:

import numpy as np
import tensorflow as tf
from keras.layers import Lambda

def add_mean_landmarks(x):
    mean_landmarks = np.array(config.MEAN_LANDMARKS, np.float32)
    mean_landmarks = mean_landmarks.flatten()
    mean_landmarks_tf = tf.convert_to_tensor(mean_landmarks)
    x = x + mean_landmarks_tf
    return x

x = Lambda(add_mean_landmarks)(x)
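The Lambda approach can be sketched end to end in a minimal, self-contained form. This is a sketch using `tf.keras` with small placeholder dimensions (`N_OUTPUTS` and a stand-in constant replace `config.N_LANDMARKS` and `config.MEAN_LANDMARKS`); the Dense weights are zero-initialized only so that the effect of the added constant is visible in the output.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

N_OUTPUTS = 4
# Stand-in for config.MEAN_LANDMARKS, flattened to match the Dense output.
mean_landmarks = np.arange(N_OUTPUTS, dtype=np.float32)

def add_mean_landmarks(x):
    # Adds the constant tensor element-wise; broadcasts over the batch.
    return x + tf.constant(mean_landmarks)

inputs = Input(shape=(8,))
# Zero-initialized weights so the pre-addition output is all zeros.
x = Dense(N_OUTPUTS, activation='linear',
          kernel_initializer='zeros', bias_initializer='zeros')(inputs)
x = Lambda(add_mean_landmarks)(x)
model = Model(inputs=inputs, outputs=x)

out = model.predict(np.zeros((1, 8), dtype=np.float32), verbose=0)
print(out)  # [[0. 1. 2. 3.]]
```

Because the Lambda output carries Keras layer metadata, the `ValueError` from the question does not occur.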
Answer 1 (score: -1)
In addition to your own answer, I would much prefer a native implementation of the addition; for example, Keras provides keras.layers.Add.
The reason is that I am not sure how well a custom lambda function pushes down to the lower layers: internally, TensorFlow (or whichever backend you use) works with a highly optimized computation graph, and custom operations tend to translate into heavier (or, in the worst case, bloated) low-level execution.
The correct way to use keras.layers.Add would then simply be:
x = keras.layers.Add()([x, add_mean_landmarks])