How to get the activations of a Dropout layer for the last mini-batch in Keras

Asked: 2019-02-22 05:53:33

Tags: keras autoencoder loss-function dropout

I have been trying to build an autoencoder with a custom loss function, where the loss depends differently on the values that the Dropout layer masked versus the values it kept. So I am trying to access the values inside the Dropout layer from the last iteration. Here is a code sample -

import keras.backend as K
import tensorflow as tf  # tf ops are used inside the loss function below
from keras import optimizers

batchSize = 30

a = 1
b = 1

def lossFunc(model,a,b):

    def mse3(y_true, y_pred):  

        temp=y_true>0
        y_pred0=y_pred*tf.cast(temp,tf.float32)

        y_inter = model.get_layer("intermediate").output
        y_inter_filter = tf.math.not_equal(y_inter, tf.zeros_like(y_inter))
        y_pred1 = y_pred*tf.cast(y_inter_filter,tf.float32)
        y_true1 = y_true*tf.cast(y_inter_filter,tf.float32)

        y_inter_filter2 = tf.math.logical_xor(temp, y_inter_filter)
        y_pred2 = y_pred*tf.cast(y_inter_filter2,tf.float32)
        y_true2 = y_true*tf.cast(y_inter_filter2,tf.float32)

        return a*K.sum(K.square(y_pred1 - y_true1), axis=-1) #+ b*K.sum(K.square(y_pred2 - y_true2), axis=-1)

    return mse3

# def lossFunc(model,a,b):
#     def mse3(y_true, y_pred):
#         temp=y_true>0
#         y_pred=y_pred*tf.cast(temp,tf.float32)
#         return K.sum(K.square(y_pred - y_true), axis=-1)
#     return mse3

sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer = sgd, loss = lossFunc(model,a,b))  # model is built in the architecture snippet below

model.fit(x_input, x_output, epochs = 100, batch_size = batchSize)
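To make the intended loss unambiguous, here is a plain-NumPy reference version of mse3. The kept_mask argument is hypothetical: it stands for the 0/1 dropout mask that I cannot currently extract from the layer:

```python
import numpy as np

def mse3_reference(y_true, y_pred, kept_mask, a=1.0, b=1.0):
    """Reference computation of the intended loss.

    kept_mask is a hypothetical 0/1 array marking which positions the
    Dropout layer kept (1) vs. zeroed out (0) for this mini-batch.
    """
    temp = y_true > 0                      # positions with a positive target
    kept = kept_mask.astype(bool)

    # Error over the units the Dropout layer kept
    y_pred1 = y_pred * kept.astype(np.float32)
    y_true1 = y_true * kept.astype(np.float32)

    # Error where exactly one of {target > 0, unit kept} holds
    xor = np.logical_xor(temp, kept).astype(np.float32)
    y_pred2 = y_pred * xor
    y_true2 = y_true * xor

    return (a * np.sum((y_pred1 - y_true1) ** 2, axis=-1)
            + b * np.sum((y_pred2 - y_true2) ** 2, axis=-1))
```

This is only a description of the target computation, not a working Keras loss, because inside mse3 the mask is not available as a concrete array.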

I have gone through the source code of the Dropout layer. However, I am not sure what value model.get_layer("intermediate").output actually returns. I am fairly sure it is not what I want, because the loss values differ from what I expect. Any direction would be greatly appreciated. Thanks.

Architecture -

from keras.layers import Input, Dense, Concatenate, Lambda, Dropout
from keras.models import Model
import tensorflow as tf
from keras.regularizers import l2

group = []
dropout_rate = 0.5
reg = .001

inputTensor = Input(shape = (n_criterias*itemCount,))

for i in range(itemCount):
    # i=i pins the loop variable so each Lambda keeps slicing its own group
    # even if the function is re-invoked after the loop has finished
    slice_i = Lambda(lambda x, i=i: x[:, n_criterias*i:n_criterias*(i+1)],
                     output_shape=(n_criterias,))(inputTensor)
    group.append(Dense(1, kernel_regularizer=l2(reg), activation='tanh')(slice_i))

intermediate = Dropout(rate=dropout_rate, name='intermediate')(Concatenate()(group))

layer1 = 200
layer2 = 100
x = Dense(layer1, kernel_regularizer=l2(reg), activation='tanh')(intermediate)
coding = Dense(layer2, kernel_regularizer=l2(reg), activation='tanh', name='coding')(x)
x = Dense(layer1, kernel_regularizer=l2(reg), activation='tanh')(coding)
outputTensor = Dense(itemCount, kernel_regularizer=l2(reg), activation='tanh')(x)

model = Model(inputs = inputTensor, outputs = outputTensor)
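One detail that may matter here (sketched in plain NumPy rather than with the actual layer): Keras Dropout uses "inverted dropout", i.e. at training time the kept units are scaled by 1/(1 - rate), while at inference the layer is an identity. So even if the intermediate tensor could be evaluated, its nonzero entries would not equal the corresponding Concatenate outputs, and testing it against zero recovers the mask only if no pre-dropout activation is exactly zero:

```python
import numpy as np

def inverted_dropout(x, rate, mask):
    # mask: hypothetical 0/1 array standing in for the layer's random mask.
    # Kept units are scaled by 1 / (1 - rate) so the expected value of the
    # output matches the input, which is what Keras Dropout does in training.
    return x * mask / (1.0 - rate)
```

With rate=0.5 the kept activations come out doubled, which could explain a loss value that differs from expectations even if the mask itself were recovered correctly.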

0 Answers:

No answers yet