Why is the accuracy so low (0.01) when the predictions are very good (99.99%)?

Asked: 2019-11-06 16:56:59

Tags: python tensorflow neural-network loss

I built my first neural network with TensorFlow 2 in Python. My idea was to build a network that can learn to convert a binary number (8 bits) into its decimal value. After a few attempts: yes, it is very precise!

But there is something I don't understand: the accuracy is very low.

The second thing is: the model has to train on more than 200,000 values, for only 256 possible answers. Where is the flaw in my code/model?

import numpy as np
import tensorflow as tf

# dataset
def dataset(length, num):
    testdata = np.random.randint(2, size=(num, length))
    solution = np.zeros((num, 1))

    for i in range(num):
        for n in range(length):
            solution[i] += testdata[i, length - n - 1] * (2 ** n)
    return testdata, solution

length = 8
num = 220000
testdata, solution = dataset(length, num)
t_testdata, t_solution = dataset(length, 256)
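As an aside, the nested loops above can be collapsed into a single matrix product, because converting 8 bits to an integer is just a dot product with the powers of two (a sketch; the variable names here are chosen for illustration):

```python
import numpy as np

# Powers of two for each bit position, most significant bit first:
# [128, 64, 32, 16, 8, 4, 2, 1]
weights = 2 ** np.arange(8)[::-1]

bits = np.random.randint(2, size=(220000, 8))
# One dot product converts every 8-bit row to its decimal value at once.
values = bits @ weights

# Sanity check: 0b00000110 == 6
assert (np.array([0, 0, 0, 0, 0, 1, 1, 0]) @ weights) == 6
```

This also shows why the task is easy for a network: the target is an exact linear function of the inputs, so even a single Dense layer can represent it perfectly.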

# Model
model = tf.keras.models.Sequential([
  tf.keras.layers.Dense(8, activation='relu'),
  tf.keras.layers.Dense(1, activation='relu')
])

model.compile(optimizer='adam',
              loss='mean_squared_error',
              metrics=['accuracy'])

# Training and evaluation
model.fit(testdata, solution, epochs=4)
model.evaluate(t_testdata, t_solution, verbose=2)
model.summary()

loss: 6.6441e-05 - accuracy: 0.0077

Shouldn't it be 0.77 or higher?

1 Answer:

Answer 0 (score: 0)

You should not use accuracy as a metric for a regression problem: you are trying to output a single continuous value, and even a tiny deviation in the prediction makes accuracy count it as wrong. Consider the following example.

Suppose you are trying to predict the value 15 and the model returns 14.99: the resulting accuracy is still zero.

m = tf.keras.metrics.Accuracy()
_ = m.update_state([[15]], [[14.99]])
m.result().numpy()

Result:

0.0  
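If you round the regression output to the nearest integer before comparing, an exact-match accuracy becomes meaningful again. A minimal sketch in plain NumPy (not part of the original answer; the values are illustrative):

```python
import numpy as np

targets = np.array([15.0, 6.0, 200.0])
predictions = np.array([14.99, 5.99, 200.02])  # near-perfect regression output

# Raw exact-match accuracy is zero...
raw_acc = np.mean(predictions == targets)      # 0.0
# ...but after rounding, every prediction matches its target.
rounded_acc = np.mean(np.rint(predictions) == targets)  # 1.0
```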

For regression, you can consider metrics such as the following.

  • Regression metrics
  • MeanSquaredError class
  • RootMeanSquaredError class
  • MeanAbsoluteError class
  • MeanAbsolutePercentageError class
  • MeanSquaredLogarithmicError class
  • CosineSimilarity class
  • LogCoshError class
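As a sketch of what one of these metrics reports, root mean squared error can be computed directly; plain NumPy is shown here (the sample values are illustrative, and `tf.keras.metrics.RootMeanSquaredError` computes the same quantity inside Keras):

```python
import numpy as np

y_true = np.array([15.0, 6.0])
y_pred = np.array([14.99, 6.02])

# RMSE: square the errors, average them, take the square root.
rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))  # ~0.0158
```

Unlike exact-match accuracy, this stays small and informative when predictions are close but not exactly equal to the targets.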

I tried the same problem with one of the metrics listed above; the results are below.

import numpy as np
import tensorflow as tf

def bin2int(bin_list):
    # e.g. bin_list = [0, 0, 0, 1] -> "0001" -> 1
    int_val = ""
    for k in bin_list:
        int_val += str(int(k))
    return int(int_val, 2)

def dataset(num):
    # num - number of samples
    bin_len = 8
    X = np.zeros((num, bin_len))
    Y = np.zeros((num))

    for i in range(num):
        X[i] = np.around(np.random.rand(bin_len)).astype(int)
        Y[i] = bin2int(X[i])
    return X, Y


no_of_samples = 220000
trainX, trainY = dataset(no_of_samples)
testX, testY = dataset(5) 


model = tf.keras.models.Sequential([
  tf.keras.layers.Dense(8, activation='relu'),
  tf.keras.layers.Dense(1, activation='relu')
])

model.compile(optimizer='adam',
              loss='mean_absolute_error',
              metrics=['mse']) 

model.fit(trainX, trainY, validation_data=(testX, testY), epochs=4)
model.summary() 

Output:

Epoch 1/4
6875/6875 [==============================] - 15s 2ms/step - loss: 27.6938 - mse: 2819.9429 - val_loss: 0.0066 - val_mse: 5.2560e-05
Epoch 2/4
6875/6875 [==============================] - 15s 2ms/step - loss: 0.0580 - mse: 0.1919 - val_loss: 0.0066 - val_mse: 6.0013e-05
Epoch 3/4
6875/6875 [==============================] - 16s 2ms/step - loss: 0.0376 - mse: 0.0868 - val_loss: 0.0106 - val_mse: 1.2932e-04
Epoch 4/4
6875/6875 [==============================] - 15s 2ms/step - loss: 0.0317 - mse: 0.0466 - val_loss: 0.0177 - val_mse: 3.2429e-04
Model: "sequential_11"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_24 (Dense)             multiple                  72        
_________________________________________________________________
dense_25 (Dense)             multiple                  9         
_________________________________________________________________
round_4 (Round)              multiple                  0         
=================================================================
Total params: 81
Trainable params: 81
Non-trainable params: 0

Prediction:

model.predict([[0., 0., 0., 0., 0., 1., 1., 0.]])

array([[5.993815]], dtype=float32)
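Rounding recovers the exact integer: the input bits 00000110 encode 6, and the raw regression output above is within 0.01 of it (a NumPy sketch, not part of the original answer):

```python
import numpy as np

prediction = np.array([[5.993815]], dtype=np.float32)
# Round to the nearest integer to decode the final answer.
decoded = int(np.rint(prediction[0, 0]))  # 6
```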