I am trying to approximate a function of two independent variables and one dependent variable using a neural network. For some reason, the approximation I get with a single neuron in the hidden layer is discontinuous, which should be impossible with the continuous logistic activation function I am using. How can I fix this? The function is shown in green and the approximation in red.
A few months ago I had code that gave me the expected output.
I think it may have something to do with how I pass the inputs/outputs. I checked the input shape and it is (n_samples, n_features), as the fit/predict methods require (here).
This is how I collect my data:
######################################## Collect dataset
import numpy as np
from sklearn.neural_network import MLPRegressor
import matplotlib.pyplot as plt

n_input = 2; start = -1; stop = 1; steps = 0.01
x = mesh(n_input, start, stop, steps)
f = decaying_nd(x)
where mesh and decaying_nd are defined as
# @brief decaying_nd: produces n-dimensional exponentially decaying dataset
# @param x: nested list
# @returns: nested list. Same as mgrid, so for example 1D would look like [[1,2,3]]
def decaying_nd(x):
    n = x.shape[0]
    f = np.ones(x[0].shape)
    for point, _ in np.ndenumerate(f):
        for dim in range(n):
            f[point] *= np.exp(-x[dim][point]**2)
    return f
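# Aside (a sketch, not from the original post): the nested loops above are
# equivalent to one vectorized line, since the product over dimensions of
# exp(-x_d**2) equals exp(-(x_1**2 + ... + x_n**2)):
#     f = np.exp(-np.sum(x**2, axis=0))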
# @brief mesh: n-dimensional mgrid
# @param n: int
# @param start: float
# @param stop: float
# @param steps: float
# @returns: nested list
def mesh(n, start=-1, stop=1, steps=0.1):
    if n < 1 or not isinstance(n, int):
        raise ValueError('dimension passed to mesh is invalid')
    mgrid = np.mgrid[tuple(slice(start, stop+steps, steps) for _ in range(n))]
    return mgrid
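For context, a quick look at what mesh returns with these settings (a sketch; the exact axis length can vary by a point because of floating-point stepping):
x = mesh(2, start=-1, stop=1, steps=0.01)
print(x.shape)  # roughly (2, 201, 201): x[0] and x[1] hold the two coordinate grids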
######################################## Build model
n = 1
regression = MLPRegressor(
    hidden_layer_sizes=(n,),
    activation='logistic',
    solver='lbfgs',
    alpha=0,
    max_iter=3000,
    tol=1e-5,
    n_iter_no_change=1000,
    random_state=seed
)
######################################## Run model
X = x.reshape(-1, x.shape[0])
F = f.reshape(-1)
regression.fit(X, F)
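For reference, the shape check mentioned above amounts to something like this (a sketch; the exact row count depends on the grid resolution):
print(X.shape, F.shape)  # e.g. (N*N, 2) and (N*N,) for an N-by-N grid, so the shape alone looks right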
Finally, this is how I plot it:
# 3D prediction plot
y = regression.predict(X)
ax = plt.axes(projection='3d')
ax.scatter3D(x[0], x[1], f.reshape(-1), c=f.reshape(-1), cmap='Greens')
ax.scatter3D(x[0], x[1], y.reshape(-1), c=y.reshape(-1), cmap='Reds', marker='x')
plt.show()
Answer 0 (score: 0)
Solved it. X should be set up like this:
X = [[x[0].reshape(-1)[i], x[1].reshape(-1)[i]] for i in range(len(x[0].reshape(-1)))]
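This builds each sample as an explicit [x[0], x[1]] pair, whereas x.reshape(-1, x.shape[0]) pairs up consecutive values of the flattened mgrid array, so the rows no longer correspond to actual grid points. A vectorized equivalent of the same fix, assuming x is the mesh output from the question:
# Pair the flattened coordinate grids column by column
X = np.column_stack([x[0].reshape(-1), x[1].reshape(-1)])
# or, for any number of input dimensions:
# X = x.reshape(x.shape[0], -1).T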