Does the official Keras stacked-LSTM classifier example use real values as training targets?

Asked: 2018-11-01 00:07:19

Tags: python neural-network keras lstm loss-function

In the official example from the Keras documentation, a stacked LSTM classifier is trained with categorical_crossentropy as the loss function, as expected: https://keras.io/getting-started/sequential-model-guide/#examples

However, the y_train values are generated with numpy.random.random(), which outputs real numbers rather than the 0/1 one-hot encoding a classifier would normally use.

Are the y_train values promoted to 0/1 values under the hood?

Can you even train with this loss function against real-valued targets between 0 and 1?

And how is accuracy computed in that case?

Confusing, no?

from keras.models import Sequential
from keras.layers import LSTM, Dense
import numpy as np

data_dim = 16
timesteps = 8
num_classes = 10

# expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(LSTM(32, return_sequences=True,
               input_shape=(timesteps, data_dim)))  # returns a sequence of vectors of dimension 32
model.add(LSTM(32, return_sequences=True))  # returns a sequence of vectors of dimension 32
model.add(LSTM(32))  # return a single vector of dimension 32
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# Generate dummy training data
x_train = np.random.random((1000, timesteps, data_dim))
y_train = np.random.random((1000, num_classes))

# Generate dummy validation data
x_val = np.random.random((100, timesteps, data_dim))
y_val = np.random.random((100, num_classes))

model.fit(x_train, y_train,
          batch_size=64, epochs=5,
          validation_data=(x_val, y_val))
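For context on why this runs at all: categorical cross-entropy is just -Σ y_true·log(y_pred) per sample, and nothing in that formula requires y_true to be 0/1. A minimal NumPy sketch (the helper name and epsilon value are illustrative, not from Keras):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # -sum(t * log(p)) over classes, averaged over samples;
    # nothing here requires y_true to be one-hot.
    y_pred = np.clip(y_pred, eps, 1.0)
    return float(np.mean(-np.sum(y_true * np.log(y_pred), axis=-1)))

# One-hot target: the usual classification case
one_hot = np.array([[0.0, 1.0, 0.0]])
pred = np.array([[0.1, 0.8, 0.1]])
loss_onehot = categorical_crossentropy(one_hot, pred)  # -log(0.8) ~ 0.223

# Real-valued "soft" target: the same formula is just as well-defined
soft = np.array([[0.2, 0.5, 0.3]])
loss_soft = categorical_crossentropy(soft, pred)
```

So the loss computes a finite value for any real-valued targets; whether that value is *meaningful* depends on the targets being a sensible probability distribution.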

1 Answer:

Answer 0 (score: 1)

In this example, y_train and y_val are no longer one-hot encoded; each entry is instead a probability for the corresponding class. Cross-entropy still applies to such targets, since a one-hot encoding is just a special case of a probability vector.

y_train[0]
array([0.30172708, 0.69581121, 0.23264601, 0.87881279, 0.46294832,
       0.5876406 , 0.16881395, 0.38856604, 0.00193709, 0.80681196])
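As for the accuracy question: Keras' categorical_accuracy metric compares the argmax of the target vector with the argmax of the prediction, so it still produces a number for soft targets, effectively scoring the "dominant class" of each target. A NumPy sketch of that behavior (the function here is a re-implementation for illustration, not the Keras source):

```python
import numpy as np

def categorical_accuracy(y_true, y_pred):
    # A sample counts as correct when the argmax of the prediction
    # matches the argmax of the target vector.
    return float(np.mean(np.argmax(y_true, axis=-1) == np.argmax(y_pred, axis=-1)))

y_true = np.array([[0.1, 0.7, 0.2],    # dominant class is index 1
                   [0.6, 0.3, 0.1]])   # dominant class is index 0
y_pred = np.array([[0.2, 0.5, 0.3],    # argmax 1 -> counted correct
                   [0.1, 0.8, 0.1]])   # argmax 1 -> counted wrong
acc = categorical_accuracy(y_true, y_pred)  # 0.5
```

With uniformly random targets like the ones in the example, this accuracy is essentially meaningless; the dummy data exists only to make the snippet runnable.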