How to get the hidden layer outputs for every epoch and store them in a list in Keras?

Date: 2019-10-07 07:28:47

Tags: python tensorflow keras neural-network activation-function

I have a Keras MLP with a single hidden layer containing some specific number of nodes. I want to extract the activation values of all neurons in that hidden layer as batches pass through, capture them at every epoch, and store them in a list for later exploration. My setup looks like this.

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

class myNetwork:
    # Architecture of our neural network.
    def multilayerPerceptron(self, Num_Nodes_hidden, input_features, output_dims,
                             activation_function='relu', learning_rate=0.001,
                             momentum_val=0.00):
        model = Sequential()
        model.add(Dense(Num_Nodes_hidden, input_dim=input_features, activation=activation_function))
        model.add(Dense(output_dims, activation='softmax'))

        model.compile(loss="categorical_crossentropy",
                      optimizer=SGD(lr=learning_rate, momentum=momentum_val),
                      metrics=['accuracy'])
        return model

Below is the other part, where I call fit and use LambdaCallback to save the weights. I want something similar, but this time saving the actual activation values of the hidden layer.

from keras.callbacks import LambdaCallback
import pickle
from keras.callbacks import ModelCheckpoint
from keras.callbacks import CSVLogger



# Setting parameters and calling inputs.
val = myNetwork()
vals = val.multilayerPerceptron(8, 4, 3, 'relu', 0.01)
batch_size_val = 20
number_iters = 200
csv_logger = CSVLogger('training.log')  # referenced in the callbacks list below
weights_ih = []
weights_ho = []
activation_vals = []


get_activtaion = LambdaCallback(on_epoch_end=lambda epoch, logs: activation_vals.append("What should I put Here"))

print_weights = LambdaCallback(on_epoch_end=lambda epoch, logs: weights_ih.append(vals.layers[0].get_weights()))
print_weights_1 = LambdaCallback(on_epoch_end=lambda epoch, logs: weights_ho.append(vals.layers[1].get_weights()))



history_callback = vals.fit(X_train, Y_train,
                                 batch_size=batch_size_val,
                                 epochs=number_iters,
                                 verbose=0,
                                 validation_data=(X_test, Y_test),
                                 callbacks = [csv_logger,print_weights,print_weights_1,get_activtaion])

I am quite confused and unsure what I should put inside get_activtaion. Please let me know what to do so that, for each of those weight snapshots, I also get the activation values for all samples in the batch.

1 Answer:

Answer 0 (score: 1)

weights_callback collects the weights of each layer:

weights_list = []  # indexed as [epoch][layer][unit(l-1)][unit(l)]

def save_weights(model):
    inner_list = []
    for layer in model.layers:
        # get_weights() returns [kernel, bias]; keep only the kernel matrix
        inner_list.append(layer.get_weights()[0])
    weights_list.append(inner_list)

weights_callback = LambdaCallback(on_epoch_end=lambda epoch, logs: save_weights(model))
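As a sanity check on the nesting above, here is a minimal plain-Python sketch (using NumPy arrays as stand-ins for Keras kernels, so it runs without TensorFlow) of how weights_list ends up indexed as [epoch][layer][unit(l-1)][unit(l)]; the 4-8-3 shapes match the question's model:

```python
import numpy as np

weights_list = []  # [epoch][layer][unit(l-1)][unit(l)]

# Stand-in kernels for a 4-8-3 MLP: input-to-hidden and hidden-to-output.
def fake_layer_kernels():
    return [np.zeros((4, 8)), np.zeros((8, 3))]

def save_weights(kernels):
    # Mirrors the callback: one inner list of per-layer kernels per epoch.
    weights_list.append(list(kernels))

for epoch in range(2):  # pretend we trained for 2 epochs
    save_weights(fake_layer_kernels())

print(len(weights_list))         # 2: one entry per epoch
print(weights_list[0][0].shape)  # (4, 8): input-to-hidden kernel
print(weights_list[1][1].shape)  # (8, 3): hidden-to-output kernel
```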

activations_callback collects the output of each layer:

from keras import backend as K

activations_list = []  # indexed as [epoch][layer][0][sample][unit]

def save_activations(model):
    # One backend function per layer, mapping the model input to that layer's output
    outputs = [layer.output for layer in model.layers]
    functors = [K.function([model.input], [out]) for out in outputs]
    # Evaluate each layer on the input set (X_input_vectors is your data)
    layer_activations = [f([X_input_vectors]) for f in functors]
    activations_list.append(layer_activations)

activations_callback = LambdaCallback(on_epoch_end=lambda epoch, logs: save_activations(model))
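The extra [0] in the indexing comes from K.function returning a one-element list per call. A stand-in sketch (NumPy only, no Keras required) of how to read a single unit's activation back out of activations_list, again assuming the question's 4-8-3 model and a batch of 20 samples:

```python
import numpy as np

activations_list = []  # [epoch][layer][0][sample][unit]

# Fake one epoch of activations for 20 samples through a 4-8-3 MLP:
# each K.function call returns a one-element list holding the batch output.
hidden = [np.random.rand(20, 8)]  # hidden layer: 20 samples x 8 units
output = [np.random.rand(20, 3)]  # softmax layer: 20 samples x 3 units
activations_list.append([hidden, output])

# Activation of hidden unit 5 for sample 0 in epoch 0:
a = activations_list[0][0][0][0, 5]
print(activations_list[0][0][0].shape)  # (20, 8)
```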

Apply the callbacks:

result = model.fit(... , callbacks = [weights_callback, activations_callback], ...)
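After training, the collected per-epoch lists can be stacked into arrays for exploration. A hedged sketch with stand-in data (same [epoch][layer][0][sample][unit] layout as above; the averaging step is just one illustrative analysis, not part of the answer's code):

```python
import numpy as np

# Stand-in: 3 epochs of activations for a 4-8-3 MLP on 20 samples,
# in the [epoch][layer][0][sample][unit] layout used above.
activations_list = [[[np.random.rand(20, 8)], [np.random.rand(20, 3)]]
                    for _ in range(3)]

# Stack the hidden layer (layer 0) across epochs: (epochs, samples, units)
hidden_by_epoch = np.stack([ep[0][0] for ep in activations_list])
print(hidden_by_epoch.shape)  # (3, 20, 8)

# Mean activation of each hidden unit per epoch: (epochs, units)
mean_per_unit = hidden_by_epoch.mean(axis=1)
print(mean_per_unit.shape)    # (3, 8)
```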
