Custom loss function in TensorFlow

Time: 2019-09-04 23:59:38

Tags: python tensorflow loss-function

I am trying to test a custom loss function.

I compared how the network learns with the built-in tf loss function versus the same function passed through a wrapper of my own. It turns out that the network does not learn when I pass my wrapper function.

The first code:

import tensorflow as tf
import matplotlib.pyplot as plt

mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def loss(y_true, y_pred):
  return tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=loss,
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=7)
model.evaluate(x_test, y_test)
The second code:

import tensorflow as tf
import matplotlib.pyplot as plt

mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def loss(y_true, y_pred):
  return tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.sparse_categorical_crossentropy,
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=7)
model.evaluate(x_test, y_test)

Output of the first code:

Epoch 1/7

60000/60000 [==============================] - 4s 66us/sample - loss: 0.3035 - acc: 0.1015
Epoch 2/7
60000/60000 [==============================] - 4s 66us/sample - loss: 0.1481 - acc: 0.0994
Epoch 3/7
60000/60000 [==============================] - 4s 69us/sample - loss: 0.1097 - acc: 0.0990
Epoch 4/7
60000/60000 [==============================] - 4s 67us/sample - loss: 0.0875 - acc: 0.0991
Epoch 5/7
60000/60000 [==============================] - 4s 66us/sample - loss: 0.0770 - acc: 0.0989
Epoch 6/7
60000/60000 [==============================] - 4s 66us/sample - loss: 0.0665 - acc: 0.0991
Epoch 7/7
60000/60000 [==============================] - 4s 66us/sample - loss: 0.0598 - acc: 0.0988
10000/10000 [==============================] - 0s 36us/sample - loss: 0.0692 - acc: 0.0993

Output of the second code:

Epoch 1/7
60000/60000 [==============================] - 4s 65us/sample - loss: 0.3002 - acc: 0.9119
Epoch 2/7
60000/60000 [==============================] - 4s 63us/sample - loss: 0.1458 - acc: 0.9561
Epoch 3/7
60000/60000 [==============================] - 4s 64us/sample - loss: 0.1066 - acc: 0.9680
Epoch 4/7
60000/60000 [==============================] - 4s 64us/sample - loss: 0.0875 - acc: 0.9727
Epoch 5/7
60000/60000 [==============================] - 4s 68us/sample - loss: 0.0738 - acc: 0.9768
Epoch 6/7
60000/60000 [==============================] - 4s 64us/sample - loss: 0.0662 - acc: 0.9787
Epoch 7/7
60000/60000 [==============================] - 4s 64us/sample - loss: 0.0575 - acc: 0.9817
10000/10000 [==============================] - 0s 35us/sample - loss: 0.0630 - acc: 0.9814

It looks like the loss is updating, but the accuracy does not improve when the custom loss function is used.
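One plausible explanation (an assumption, not confirmed in the question): when `loss` is a custom function, tf.keras cannot infer from the loss name that `metrics=['accuracy']` should resolve to `sparse_categorical_accuracy`, and it may fall back to `categorical_accuracy`. On integer labels of shape `(n, 1)`, `categorical_accuracy` takes an argmax over a length-1 axis, which is always 0, so the metric effectively measures "was class 0 predicted?" and hovers near 10% even as the loss decreases. A NumPy sketch of that arithmetic, using a hypothetical "perfect" classifier:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
y_true = rng.integers(0, 10, size=(n, 1))  # integer labels, shape (n, 1)
y_pred = np.eye(10)[y_true[:, 0]]          # one-hot output of a perfect classifier

# sparse_categorical_accuracy: compare the integer label to argmax(y_pred)
sparse_acc = float(np.mean(y_true[:, 0] == y_pred.argmax(axis=-1)))

# categorical_accuracy: argmax over BOTH tensors; argmax over the length-1
# label axis is always 0, so this only counts samples predicted as class 0
cat_acc = float(np.mean(y_true.argmax(axis=-1) == y_pred.argmax(axis=-1)))

print(sparse_acc)  # 1.0
print(cat_acc)     # roughly 0.1
```

If this is indeed the cause, passing the metric explicitly, e.g. `metrics=['sparse_categorical_accuracy']`, should sidestep the name-based inference.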

0 Answers:

No answers yet