Neural network loss is low but predictions are inaccurate

Asked: 2018-08-31 12:20:48

Tags: python neural-network keras

I trained a neural network model to find the roots of quadratic equations (with discriminant >= 0), but when I check it on an example, it gives answers far from the exact ones even though the loss is small.

Loss plot: (image omitted)

My example:

a = 1
b = -2
c = -24
model.predict(np.array([[a/max,b/max,c/max]])) * max
Out[421]: array([[-15.218947 ,  -1.3733944]], dtype=float32) #but should be 6 and -4
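As a sanity check (my addition, not part of the original post), the exact roots of this example can be computed directly from the quadratic formula, confirming the expected 6 and -4:

```python
import numpy as np

# x^2 - 2x - 24 = 0 factors as (x - 6)(x + 4), so the roots are 6 and -4.
a, b, c = 1, -2, -24
D = b**2 - 4*a*c                      # discriminant: 100
roots = ((-b + np.sqrt(D)) / (2*a),   # (2 + 10) / 2 = 6.0
         (-b - np.sqrt(D)) / (2*a))   # (2 - 10) / 2 = -4.0
print(roots)  # (6.0, -4.0)
```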

Here is the full code:

import numpy as np

from keras.models import Sequential
from keras.layers import Dense, Dropout

x_in = np.array([]).reshape(0,3)
x_answer = np.array([]).reshape(0,2)

for i in range(300):
    a = np.random.randint(-1000,1000)
    b = np.random.randint(-1000,1000)
    c = np.random.randint(-1000,1000)
    D = np.power(b,2)-4*a*c
    if(a != 0):
        if(D >= 0):
            x1 = (-b+np.sqrt(D))/(2*a)
            x2 = (-b-np.sqrt(D))/(2*a)
            x_in = np.concatenate((x_in,[[a,b,c]]))
            x_answer = np.concatenate((x_answer,[[x1,x2]]))

np.random.seed()

NB_EPOCH = 300
VERBOSE = 1

x_in = np.asarray(x_in, dtype=np.float32)
x_answer = np.asarray(x_answer, dtype=np.float32)

min_in = np.nanmin(x_in)
min_answ = np.nanmin(x_answer)
min = -1000 #np.min(np.array([min_in,min_answ])) -- note: shadows the built-in min

max_in = np.nanmax(x_in)
max_answ = np.nanmax(x_answer)
max = 1000 #np.max(np.array([max_in,max_answ])) -- note: shadows the built-in max

x_in /= max
x_answer /= max

model = Sequential()
model.add(Dense(30, input_dim = 3, activation='relu'))
#model.add(Dropout(0.2))
model.add(Dense(40, activation='softmax'))
#model.add(Dropout(0.2))

model.add(Dense(50, activation='linear'))
model.add(Dense(2))

model.compile(loss='mse', optimizer='adam')

history = model.fit(x_in, x_answer, epochs=NB_EPOCH, verbose=VERBOSE)

Update: (image omitted)

What should I do?

1 answer:

Answer 0 (score: 0)

I think 300 training points is far too few for a parameter space of (2000)**3 possible values of the parameters a, b and c (and the loop yields even fewer, since samples with a == 0 or D < 0 are discarded). You could try giving it more training data.
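A minimal sketch of that suggestion, keeping the question's sampling ranges but collecting a fixed number of valid samples instead of making 300 attempts. `N_SAMPLES` and the seed are my own choices, not from the answer:

```python
import numpy as np

N_SAMPLES = 10_000  # target count of valid equations; raise as needed

x_in, x_answer = [], []
rng = np.random.default_rng(0)
while len(x_in) < N_SAMPLES:
    a, b, c = rng.integers(-1000, 1000, size=3)
    if a == 0:
        continue  # not a quadratic
    D = b**2 - 4*a*c
    if D < 0:
        continue  # no real roots
    x1 = (-b + np.sqrt(D)) / (2*a)
    x2 = (-b - np.sqrt(D)) / (2*a)
    x_in.append([a, b, c])
    x_answer.append([x1, x2])

x_in = np.asarray(x_in, dtype=np.float32)
x_answer = np.asarray(x_answer, dtype=np.float32)
```

The rejection loop runs until the target count is reached, so the dataset size no longer depends on how many random draws happen to have a non-negative discriminant.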