How to get the hidden node representation of an LSTM in Keras

Asked: 2017-03-23 06:24:46

Tags: tensorflow deep-learning keras

I implemented a model in Keras using an LSTM. I am trying to get the representation of the hidden nodes of the LSTM layer. Is this the correct way to get that representation (stored in the activations variable)?

model = Sequential()
model.add(LSTM(50, input_dim=sample_index))

activations = model.predict(testX)

model.add(Dense(no_of_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',  optimizer='adagrad', metrics=['accuracy'])
hist=model.fit(trainX, trainY, validation_split=0.15, nb_epoch=5, batch_size=20, shuffle=True, verbose=1)

1 Answer:

Answer 0 (score: 1)

Edit: Your way of getting the hidden representation is also correct. Reference: https://github.com/fchollet/keras/issues/41
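As a concrete sketch of the approach discussed in that issue: build a second model that shares the trained layers but stops at the LSTM, and call predict on it. This is a minimal example using the modern tf.keras functional API (the original question uses the 2017 standalone Keras API); the dimensions and variable names here are placeholders, not taken from the question.

```python
import numpy as np
from tensorflow import keras

# Toy dimensions, chosen only for illustration
timesteps, features, hidden, no_of_classes = 10, 8, 50, 3

inputs = keras.Input(shape=(timesteps, features))
lstm_out = keras.layers.LSTM(hidden)(inputs)           # final hidden state, shape (batch, hidden)
outputs = keras.layers.Dense(no_of_classes, activation='softmax')(lstm_out)
model = keras.Model(inputs, outputs)

# A second model that reuses the same layers (and weights) but
# outputs the LSTM hidden state instead of the softmax predictions.
hidden_model = keras.Model(inputs, lstm_out)

testX = np.random.rand(4, timesteps, features).astype('float32')
activations = hidden_model.predict(testX)
print(activations.shape)  # (4, 50)
```

Because `hidden_model` shares layers with `model`, extracting activations after training `model` reflects the trained weights; no retraining or weight copying is needed.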

After training the model, you can save the model architecture and its weights, like this:

from keras.models import model_from_json

json_model = yourModel.to_json()
open('yourModel.json', 'w').write(json_model)
yourModel.save_weights('yourModel.h5', overwrite=True)

Then you can visualize the weights of the LSTM layer, like this:

from keras.models import model_from_json
import matplotlib.pyplot as plt

model = model_from_json(open('yourModel.json').read())
model.load_weights('yourModel.h5')

layer = model.layers[1]  # pick the LSTM layer you want to visualize; [1] is just an example
weights = layer.get_weights()  # an LSTM layer holds several weight arrays (input, recurrent, bias), not just (weights, bias)
plt.matshow(weights[0], fignum=100, cmap=plt.cm.gray)
plt.show()
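If you want the hidden state at every timestep rather than only the final one, pass return_sequences=True to the LSTM layer. A minimal sketch, again using the tf.keras functional API with placeholder dimensions:

```python
import numpy as np
from tensorflow import keras

timesteps, features, hidden = 10, 8, 50

inputs = keras.Input(shape=(timesteps, features))
# return_sequences=True yields the hidden state at every timestep,
# so the output has shape (batch, timesteps, hidden).
seq_out = keras.layers.LSTM(hidden, return_sequences=True)(inputs)
seq_model = keras.Model(inputs, seq_out)

x = np.random.rand(2, timesteps, features).astype('float32')
states = seq_model.predict(x)
print(states.shape)  # (2, 10, 50)
```

The last slice `states[:, -1, :]` equals the single vector you would get without return_sequences, which is a useful sanity check when switching between the two modes.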