Error when merging the output of an LSTM model with the output of a VGG model

Date: 2018-11-16 09:06:20

Tags: tensorflow keras lstm keras-layer vgg-net

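The snippets below assume roughly these imports (Keras 2.x):

import os
import numpy as np
from keras.models import Model, Sequential
from keras.layers import Input, Embedding, LSTM, Dense, Dropout, Multiply
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing.image import load_img, img_to_array
from keras.callbacks import Callback
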
#make custom embedding layer
embedding_layer = Embedding(len(word_index) + 1,
                        EMBEDDING_DIM,
                        weights=[embedding_matrix],
                        input_length=max_length_of_text,
                        trainable=False)

#make lstm model
inputs = Input((max_length_of_text, ))
x = embedding_layer(inputs)
x = LSTM(units=512, return_sequences=True)(x)
x = Dropout(0.1)(x)
x = LSTM(units=512, return_sequences=False)(x)
x = Dropout(0.1)(x)
x = Dense(1024, activation='tanh')(x)
lstm_model = Model(inputs,x)


#make vgg model
vgg = VGG16(weights=None, include_top=True)
vgg.load_weights('./vgg_weights.h5')
vgg.layers.pop()
for l in vgg.layers[:]:
    l.trainable = False
inp = vgg.input
out = Dense(1024, activation='tanh')(vgg.layers[-1].output)
vgg_model = Model(inp,out)


#make final model
fc_model = Sequential()
fc_model = Multiply()([vgg_model.output,lstm_model.output])
fc_model.add(Merge([vgg_model, lstm_model], mode='mul'))
fc_model.add(Dropout(0.2))
fc_model.add(Dense(512, activation='tanh'))
fc_model.add(Dropout(0.2))
fc_model.add(Dense(26, activation='softmax'))
fc_model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
    metrics=['accuracy'])


fc_model.summary()

'''
if os.path.exists(model_weights_filename):
   print("Loading Weights...")
   fc_model.load_weights(model_weights_filename)

'''


#Train

#--
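# Train/test split at index 108032 (= 1688 batches of 64)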
img_lis_train = img_lis[ : 108032]
img_lis_test  = img_lis[ 108032 : ]
questions_train = questions[ : 108032]
questions_test  = questions[ 108032 : ]
answers_train = answers[ : 108032]
answers_test  = answers[ 108032 : ]


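# Generator that yields ([image_batch, question_batch], answer_batch) in batches of 64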
def mygen(questions_train,img_lis_train,answers_train):
    start = 0  
    data_size = len(questions_train)
    batch_size = 64
    while True:          
        if( start+batch_size <= data_size ):
            batch_ques = questions_train[ start : start+batch_size ] 
            batch_ans = answers_train[ start : start+batch_size ] 
            batch_img_names = img_lis_train[ start : start+batch_size ] 
        elif(start < data_size):
            batch_ques = questions_train[ start : ] 
            batch_ans = answers_train[ start : ] 
            batch_img_names = img_lis_train[ start : ]
        else:
            break       

        batch_img = []
        for img_name in batch_img_names:
            img = load_img('./dataset/images/' + str(img_name) + '.png' , target_size = (224,224))
            img = img_to_array(img)   
            batch_img.append( preprocess_input(img) )    

        start += batch_size
        print('start = ' + str(start))
        yield [np.array(batch_img), np.array(batch_ques)] ,np.array(batch_ans)


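# Callback that saves the model weights every N batches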
class WeightsSaver(Callback):
    def __init__(self, N):
        self.N = N
        self.batch = 0

    def on_batch_end(self, batch, logs={}):
        if self.batch % self.N == 0:
            name = './weights/weights.h5'
            self.model.save_weights(name)
        self.batch += 1

fc_model.load_weights('./weights/weights.h5')
fc_model.fit_generator(mygen(questions_train, img_lis_train, answers_train),
    steps_per_epoch=1688, epochs=100, callbacks=[WeightsSaver(10)])

The error I get is that fc_model is turned into a tensor by this line:

fc_model = Multiply()([vgg_model.output,lstm_model.output])

In an earlier version of Keras, the line

fc_model.add(Merge([vgg_model, lstm_model], mode='mul'))

was working, but Merge has been removed from newer versions of Keras. I want to multiply the outputs of the LSTM model and the VGG model in fc_model. Can anyone suggest how to do this?

1 Answer:

Answer 0 (score: 1)

You need to use the Keras functional API to build non-sequential models. Your code would then look like this:

mult = Multiply()([vgg_model.output,lstm_model.output])
x = Dropout(0.2)(mult)
x = Dense(512, activation='tanh')(x)
x = Dropout(0.2)(x)
out = Dense(26, activation='softmax')(x)

fc_model = Model([vgg_model.input, lstm_model.input], out)
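
Note that Multiply() performs an element-wise product, so both branches must output tensors of the same shape; that is why both end in a Dense(1024) layer. Below is a minimal sketch of how the merged model could then be compiled and trained, reusing the settings and the mygen generator from the question (the generator already yields [image_batch, question_batch], answer_batch, which matches the input order [vgg_model.input, lstm_model.input]):

# Compile with the same settings as the original code.
fc_model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
                 metrics=['accuracy'])
fc_model.summary()

# Train with the existing generator; its [images, questions] output order
# matches the model's [vgg_model.input, lstm_model.input] inputs.
fc_model.fit_generator(mygen(questions_train, img_lis_train, answers_train),
                       steps_per_epoch=1688, epochs=100,
                       callbacks=[WeightsSaver(10)])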