How to implement a stateful custom activation function for a feedforward neural network in Keras?

Asked: 2019-04-22 05:26:48

Tags: python machine-learning keras neural-network

I am trying to implement a custom activation function with Keras and the Theano backend that accumulates its input over multiple time steps in a variable vmem. Once vmem exceeds a preset threshold vth, the activation outputs 1 (a spike) and resets vmem to 0; otherwise it outputs 0.

I tried to model this by extending the Keras Layer class, as described at https://keras.io/layers/writing-your-own-keras-layers/. My code so far is as follows:

 class SpikeRelu(Layer):
     def __init__(self, threshold, **kwargs):
         super(SpikeRelu, self).__init__(**kwargs)
         self.threshold = threshold
         self._vmem = 0

     def call(self, x):
         self._vmem += x
         op_spikes = K.cast(K.greater_equal(self._vmem, self.threshold), K.floatx())
         if (op_spikes == 1):
             self._vmem = 0
         print(self._vmem)
         return op_spikes

     def get_config(self):
         config = {
             'vmem': self._vmem,
             'threshold': self.threshold
         }
         base_config = super(SpikeRelu, self).get_config()
         return dict(list(base_config.items()) + list(config.items()))

     def compute_output_shape(self, input_shape):
         return input_shape

I then built a one-layer model with SpikeRelu() as its only layer:

def testSpikeRelu(inSize, th):
  model = Sequential()
  model.add(Flatten(input_shape=inSize))
  model.add(SpikeRelu(threshold=th))
  return model

Then I wrote the following driver code to verify its behaviour:

test_input = np.arange(10)
num_batches = 1
test_input = test_input.reshape((1, 1, 10))
vth = 5 # threshold voltage
timesteps = 2
output = []
for b in range(num_batches):
    model = testSpikeRelu((1, 10), vth)
    for t in range(timesteps):
        output = model.predict(test_input, verbose=1)
        print (output)

I expected to see output = [[0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]] at t = 0 and output = [[0. 0. 0. 1. 1. 1. 1. 1. 1. 1.]] at t = 1. Instead, the output I get at every time step is [[0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]], which suggests that the value of self._vmem is not preserved between inference runs and is always reset to 0. So, how can I preserve each neuron's internal state self._vmem across multiple inference calls?
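For reference, the expected behaviour described above can be reproduced with a plain-NumPy sketch of the intended dynamics (the helper name `spike_relu_reference` is hypothetical, introduced here only for illustration):

```python
import numpy as np

def spike_relu_reference(x, threshold, timesteps):
    """Pure-NumPy reference for the intended dynamics: accumulate the input
    into vmem at each step, spike where vmem >= threshold, reset spiked units."""
    vmem = np.zeros_like(x, dtype=float)
    outputs = []
    for _ in range(timesteps):
        vmem = vmem + x
        spikes = (vmem >= threshold).astype(float)
        vmem = vmem * (1.0 - spikes)  # reset units that spiked back to 0
        outputs.append(spikes)
    return outputs

out = spike_relu_reference(np.arange(10), threshold=5, timesteps=2)
# out[0] -> [0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]
# out[1] -> [0. 0. 0. 1. 1. 1. 1. 1. 1. 1.]
```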

0 Answers:

There are no answers.
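One possible direction, sketched here with tf.keras rather than the question's Theano backend: store vmem in a non-trainable weight created in `build`. Weights live on the layer object, so they persist between `predict()` calls, and resetting spiked units can be done with a masked `assign` instead of a Python-level `if` on a symbolic tensor. This is a minimal sketch assuming batch size 1, not a drop-in answer for the original Keras/Theano setup:

```python
import numpy as np
import tensorflow as tf

class SpikeRelu(tf.keras.layers.Layer):
    """Spiking activation whose membrane potential persists across predict() calls."""
    def __init__(self, threshold, **kwargs):
        super(SpikeRelu, self).__init__(**kwargs)
        self.threshold = threshold

    def build(self, input_shape):
        # Non-trainable weight: one membrane potential per neuron.
        self.vmem = self.add_weight(name='vmem', shape=input_shape[1:],
                                    initializer='zeros', trainable=False)

    def call(self, x):
        new_vmem = self.vmem + x[0]  # assumes batch size 1
        spikes = tf.cast(new_vmem >= self.threshold, tf.float32)
        # Keep the potential only where no spike fired (symbolic reset).
        self.vmem.assign(new_vmem * (1.0 - spikes))
        return tf.expand_dims(spikes, 0)

model = tf.keras.Sequential([SpikeRelu(threshold=5.0, input_shape=(10,))])
x = np.arange(10, dtype=np.float32).reshape(1, 10)
print(model.predict(x))  # t = 0: [[0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]]
print(model.predict(x))  # t = 1: [[0. 0. 0. 1. 1. 1. 1. 1. 1. 1.]]
```

The key design choice is replacing the Python attribute `self._vmem = 0` with a backend variable: a plain Python attribute is only touched while the symbolic graph is being built, not on every inference run, whereas a variable update executes each time the model runs.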