Batch extraction of intermediate layers in Keras with TensorFlow

Date: 2018-08-25 16:32:19

Tags: python tensorflow keras deep-learning batch-processing

I am currently trying to use an intermediate layer of an already trained DL model as an embedding for a given input. The code below works for extracting the desired layer, but iterating over inputs one at a time is very slow for a large number of inputs.

import numpy as np
from keras.models import load_model
from keras.preprocessing.sequence import pad_sequences
from keras import backend as K

# tokenizer is a Tokenizer already fitted on the training texts (defined elsewhere)
model = load_model('model.h5')
inp = model.input
outputs = [layer.output for layer in model.layers]
# one backend function per layer, each taking (input, learning_phase)
functors = [K.function([inp] + [K.learning_phase()], [out]) for out in outputs]

def text2tensor(text):
    """Convert string to tensor"""
    tensor = tokenizer.texts_to_sequences([text])
    tensor = pad_sequences(tensor, maxlen=10, padding='pre')
    return tensor

def get_embedding(tensor, at_layer):
    """Get the output at a particular layer in the network."""
    # Note: this rebuilds a backend function for every layer on each call,
    # which is a large part of why the loop below is so slow.
    functors = [K.function([inp] + [K.learning_phase()], [out]) for out in outputs][at_layer - 1]
    layer_outs = [func([tensor, 1.]) for func in [functors]]
    return layer_outs[0][0]


texts = ['this is my first text',
         'this is my second text',
         'this is my third text',
         # ... nth text
         ]

embeddings = np.empty((0, 256))
for t in texts:
    tensor = text2tensor(t)
    embedding = get_embedding(tensor, at_layer=4)    # one forward pass per text
    embeddings = np.append(embeddings, [embedding[0]], axis=0)

How can I make use of batch processing instead of doing this one input at a time? The implementation above works, but it is very slow.

1 Answer:

Answer 0 (score: 1)

Further to the point I mentioned in the comments, I suggest you create a model instead of a backend function:

my_desired_layer = model.layers[3]                        # e.g. the 4th layer (at_layer=4 above)
new_model = Model(model.input, my_desired_layer.output)   # reuse the trained model's own input
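
If the layer has a known name, it can also be fetched with model.get_layer instead of indexing into model.layers; the name 'dense_2' below is just a placeholder:

my_desired_layer = model.get_layer('dense_2')   # hypothetical layer name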

Then preprocess all of your text data at once to form a single input array (i.e. my_data below), and use the predict method, passing it a batch_size argument to take advantage of batching:

out = new_model.predict(my_data)   # the default batch size is 32
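
For reference, here is a minimal end-to-end sketch of the batched version, assuming the fitted tokenizer, maxlen=10, and at_layer=4 from the question; the batch_size of 128 and the names embedding_model and my_data are just illustrative:

from keras.models import load_model, Model
from keras.preprocessing.sequence import pad_sequences

model = load_model('model.h5')

# sub-model that outputs the activations of the 4th layer (at_layer=4 above)
embedding_model = Model(model.input, model.layers[3].output)

# preprocess all texts at once instead of one by one
sequences = tokenizer.texts_to_sequences(texts)               # fitted Tokenizer from the question
my_data = pad_sequences(sequences, maxlen=10, padding='pre')

# a single predict call; Keras splits my_data into batches internally
embeddings = embedding_model.predict(my_data, batch_size=128)
print(embeddings.shape)   # (len(texts), 256) if the 4th layer outputs 256 features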