How can I implement a custom activation function (an RBF kernel whose mean and variance are adjusted by gradient descent) in Neupy or Theano, for use in Neupy?
{Quick background: gradient descent works with every parameter in the network. I want to make a specialized feature space that contains optimized feature parameters, so that Neupy}
I think my problem lies in the creation of the parameters, their shapes, and the way they are connected.
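Concretely, the function I am after (as implemented in the code below) is

    output = exp(-||x - mean|| / std_dev)

where the mean and the standard deviation are both trainable parameters.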
The main function of interest:
import numpy as np
from neupy import layers, init


class RBF(layers.ActivationLayer):
    def initialize(self):
        super(RBF, self).initialize()
        self.add_parameter(name='mean', shape=(1,),
                           value=init.Normal(), trainable=True)
        self.add_parameter(name='std_dev', shape=(1,),
                           value=init.Normal(), trainable=True)

    def output(self, input_value):
        return rbf(input_value, self.parameters)


def rbf(input_value, parameters):
    K = _outer_substract(input_value, parameters['mean'])
    return np.exp(- np.linalg.norm(K) / parameters['std_dev'])


def _outer_substract(x, y):
    return (x - y.T).T
Help would be greatly appreciated, as this would provide great insight into how to customize Neupy networks. The documentation could use some work in certain areas, to say the least...
Answer 0 (score: 1)
When a layer changes the shape of the input variable, it has to notify the subsequent layers about that change. For this case it must have a customized output_shape property. For example:
import numpy as np
import theano.tensor as T

from neupy import layers
from neupy.utils import as_tuple


class Flatten(layers.BaseLayer):
    """
    Slight modification of the Reshape layer from the neupy library:
    https://github.com/itdxer/neupy/blob/master/neupy/layers/reshape.py
    """
    @property
    def output_shape(self):
        # The number of output features depends on the input shape.
        # When the layer receives an input with shape (10, 3, 4),
        # the output will be (10, 12). The first number, 10, is the
        # number of samples, which you typically don't need to
        # change during propagation.
        n_output_features = np.prod(self.input_shape)
        return (n_output_features,)

    def output(self, input_value):
        n_samples = input_value.shape[0]
        return T.reshape(input_value, as_tuple(n_samples, self.output_shape))
If you run it in the terminal, you will see that it works:
>>> network = layers.Input((3, 4)) > Flatten()
>>> predict = network.compile()
>>> predict(np.random.random((10, 3, 4))).shape
(10, 12)
In your example, I can see a few problems:

1. The rbf function doesn't return a Theano expression, so it should fail during function compilation.
2. Functions like np.linalg.norm will return a scalar if you don't specify the axis along which to compute the norm.

The following solution should work for you:
import numpy as np
from neupy import layers, init
import theano.tensor as T


def norm(value, axis=None):
    return T.sqrt(T.sum(T.square(value), axis=axis))


class RBF(layers.BaseLayer):
    def initialize(self):
        super(RBF, self).initialize()

        # It's more flexible when the shape of the parameters
        # depends on the input shape
        self.add_parameter(
            name='mean', shape=self.input_shape,
            value=init.Constant(0.), trainable=True)

        self.add_parameter(
            name='std_dev', shape=self.input_shape,
            value=init.Constant(1.), trainable=True)

    def output(self, input_value):
        K = input_value - self.mean
        return T.exp(-norm(K, axis=0) / self.std_dev)


network = layers.Input(1) > RBF()
predict = network.compile()
print(predict(np.random.random((10, 1))))

network = layers.Input(4) > RBF()
predict = network.compile()
print(predict(np.random.random((10, 4))))
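As a quick sanity check (not part of the original answer), the norm helper above can be compiled on its own and compared against NumPy's np.linalg.norm along the same axis:

import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')
# Same expression as the norm helper, fixed to axis=0 (per-feature norm).
norm_fn = theano.function([x], T.sqrt(T.sum(T.square(x), axis=0)))

data = np.random.random((10, 4)).astype(theano.config.floatX)
assert np.allclose(norm_fn(data), np.linalg.norm(data, axis=0))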
Answer 1 (score: 0)
While itdxer answered the question sufficiently, I would like to add the exact solution to this problem.
import numpy as np
import theano.tensor as T
from neupy import layers, init

network = layers.Input(size) > RBF() > layers.Softmax(num_out)


# Elementwise Gaussian (RBF)
def rbf(value, mean, std):
    return T.exp(-.5 * T.sqr(value - mean) / T.sqr(std)) / (std * T.sqrt(2 * np.pi))


class RBF(layers.BaseLayer):
    def initialize(self):
        # Begin by initializing.
        super(RBF, self).initialize()

        # Add parameters to train.
        self.add_parameter(name='means', shape=self.input_shape,
                           value=init.Normal(), trainable=True)
        self.add_parameter(name='std_dev', shape=self.input_shape,
                           value=init.Normal(), trainable=True)

    # Define output function for the RBF layer.
    def output(self, input_value):
        return rbf(input_value, self.means, self.std_dev)
If you are interested in training, it is as simple as:
from neupy import algorithms

# Set the training algorithm.
gdnet = algorithms.Momentum(
    network,
    momentum=0.1
)

# Train.
gdnet.train(x, y, max_iter=100)
Compile with the proper input and targets, and the means and variances will be updated on an elementwise basis.
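A minimal end-to-end sketch of the above (the toy data, the layer sizes, and the rbf_layer variable are illustrative assumptions, not part of the original answer):

import numpy as np
from neupy import algorithms, layers

# Toy problem: 4 input features, 3 classes, one-hot targets.
x = np.random.random((100, 4))
y = np.eye(3)[np.random.randint(0, 3, size=100)]

rbf_layer = RBF()
network = layers.Input(4) > rbf_layer > layers.Softmax(3)

gdnet = algorithms.Momentum(network, momentum=0.1)
gdnet.train(x, y, max_iter=100)  # same call as above; some neupy versions name this parameter epochs

# The parameters created by add_parameter are Theano shared variables,
# so the trained per-feature values can be read back directly.
print(rbf_layer.means.get_value())
print(rbf_layer.std_dev.get_value())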