I am new to machine learning and am practicing building a neural network that approximates a function. Purely for learning purposes and to inspect the state of the network, I would like to know the neural network's initial coefficients. Here is a reproducible example:
import sklearn.neural_network as sknn
import numpy as np

LIMIT = 10.0

# Function I want to approximate
def funcion(x):
    if x < 3:
        return 0
    if x > 7:
        return 12
    return 3*(x-3)

X = np.array([])
Y = np.array([])
# Data training set
for x in np.arange(0.0, LIMIT, 1.5):
    X = np.append(X, x)
    Y = np.append(Y, funcion(x))
X = np.append(X, 10)
Y = np.append(Y, funcion(10))
X = np.reshape(X, (-1, 1))

nn = sknn.MLPRegressor(
    learning_rate_init=0.01,
    learning_rate='constant',
    activation='logistic',
    hidden_layer_sizes=(2,1),
    max_iter=1,
    random_state=None)

print('coefficients: ', nn.coefs_)  # THIS GIVES THE ERROR
nn.fit(X, Y)
Output:
Traceback (most recent call last):
File "aproxFun2.py", line 41, in <module>
print('coefficients: ', nn.coefs_)
AttributeError: 'MLPRegressor' object has no attribute 'coefs_'
Whenever I call nn.coefs_ after nn.fit(X, Y), it prints the data, but I want to know these values before fitting.
Answer 0 (score: 0)
coefs_ is not initialized until _initialize(self, y, layer_units) is called (inside fit()), so I guess you can't.
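A minimal check (my own addition, not part of the original answer) confirms this: coefs_ only exists after fit(). Fitting with max_iter=1 gets you weights that are one gradient step away from the initial ones, close but not identical. The toy data below is arbitrary.

```python
import warnings

import numpy as np
from sklearn.neural_network import MLPRegressor

# Arbitrary toy data, just to make fit() runnable
X = np.arange(10, dtype=float).reshape(-1, 1)
y = np.arange(10, dtype=float)

nn = MLPRegressor(hidden_layer_sizes=(2,), max_iter=1, random_state=0)
print(hasattr(nn, 'coefs_'))  # False: weights not created yet

with warnings.catch_warnings():
    # max_iter=1 triggers a ConvergenceWarning; silence it for this demo
    warnings.simplefilter('ignore')
    nn.fit(X, y)

print(hasattr(nn, 'coefs_'))  # True: weights exist after fit()
```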
Answer 1 (score: 0)
Reading the source code (link), the initialization is performed by calling the following method on each layer in turn:
def _init_coef(self, fan_in, fan_out):
    if self.activation == 'logistic':
        # Use the initialization method recommended by
        # Glorot et al.
        init_bound = np.sqrt(2. / (fan_in + fan_out))
    elif self.activation in ('identity', 'tanh', 'relu'):
        init_bound = np.sqrt(6. / (fan_in + fan_out))
    else:
        # this was caught earlier, just to make sure
        raise ValueError("Unknown activation function %s" %
                         self.activation)

    coef_init = self._random_state.uniform(-init_bound, init_bound,
                                           (fan_in, fan_out))
    intercept_init = self._random_state.uniform(-init_bound, init_bound,
                                                fan_out)
    return coef_init, intercept_init
where fan_in and fan_out are the sizes of the layer's input and output, respectively.
So, for a fixed random state, you can actually find out the initialization by running the code above yourself.
Regarding the initialization itself: the weights are sampled from a uniform distribution centered at 0, whose support is a function of the input and output layer sizes. For more details, see the reference paper Understanding the difficulty of training deep feedforward neural networks.
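As a sketch of that idea, the drawing procedure can be reproduced outside the estimator. The standalone helper init_coef and the layer sizes below are my own illustration (for 1 input, hidden layers (2, 1), and 1 output); whether the draws match a given scikit-learn version bit-for-bit depends on its internal order of random calls, so treat this as an approximation of the mechanism, not a guaranteed replica.

```python
import numpy as np

def init_coef(rng, fan_in, fan_out, activation='logistic'):
    # Mirrors the Glorot-style bounds shown in the snippet above
    if activation == 'logistic':
        init_bound = np.sqrt(2.0 / (fan_in + fan_out))
    elif activation in ('identity', 'tanh', 'relu'):
        init_bound = np.sqrt(6.0 / (fan_in + fan_out))
    else:
        raise ValueError("Unknown activation function %s" % activation)
    coef = rng.uniform(-init_bound, init_bound, (fan_in, fan_out))
    intercept = rng.uniform(-init_bound, init_bound, fan_out)
    return coef, intercept

# Layer sizes for 1 input feature, hidden_layer_sizes=(2, 1), 1 output
layer_units = [1, 2, 1, 1]
rng = np.random.RandomState(42)  # fixed seed for reproducibility

for fan_in, fan_out in zip(layer_units[:-1], layer_units[1:]):
    coef, intercept = init_coef(rng, fan_in, fan_out)
    print(coef.shape, intercept.shape)
```

Every sampled weight lies within ±sqrt(2 / (fan_in + fan_out)) for the logistic activation, which you can verify directly on the returned arrays.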