I trained a model based on example code I found on the internet. Test accuracy is at 92% and the checkpoints are saved to a directory. In the meantime (training has now been running for 3 days) I wanted to write my prediction code, so that I can learn more instead of just waiting.
This is my third day of deep learning, so I may not know what I am doing. Here is how I am trying to predict. The code runs, but the results are nowhere near 90% accuracy.
Here is how I create the model:
from keras.models import Sequential
from keras.layers import Activation, Dense, Dropout, RepeatVector, TimeDistributed
from keras.layers import recurrent

INPUT_LAYERS = 2
OUTPUT_LAYERS = 2
AMOUNT_OF_DROPOUT = 0.3
HIDDEN_SIZE = 700
INITIALIZATION = "he_normal"  # Gaussian initialization scaled by fan_in (He et al., 2014)
CHARS = list("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ .")


def generate_model(output_len, chars=None):
    """Generate the model"""
    print('Build model...')
    chars = chars or CHARS
    model = Sequential()
    # "Encode" the input sequence using an RNN, producing an output of HIDDEN_SIZE.
    # Note: in a situation where your input sequences have a variable length,
    # use input_shape=(None, nb_feature).
    for layer_number in range(INPUT_LAYERS):
        model.add(recurrent.LSTM(HIDDEN_SIZE, input_shape=(None, len(chars)), init=INITIALIZATION,
                                 return_sequences=layer_number + 1 < INPUT_LAYERS))
        model.add(Dropout(AMOUNT_OF_DROPOUT))
    # For the decoder's input, we repeat the encoded input for each time step
    model.add(RepeatVector(output_len))
    # The decoder RNN could be multiple layers stacked or a single layer
    for _ in range(OUTPUT_LAYERS):
        model.add(recurrent.LSTM(HIDDEN_SIZE, return_sequences=True, init=INITIALIZATION))
        model.add(Dropout(AMOUNT_OF_DROPOUT))
    # For each step of the output sequence, decide which character should be chosen
    model.add(TimeDistributed(Dense(len(chars), init=INITIALIZATION)))
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
In a separate file, predict.py, I import this method to create my model and try to predict:
...import code

model = generate_model(len(question), dataset['chars'])
model.load_weights('models/weights.204-0.20.hdf5')

def decode(pred):
    return character_table.decode(pred, calc_argmax=False)

x = np.zeros((1, len(question), len(dataset['chars'])))
for t, char in enumerate(question):
    x[0, t, character_table.char_indices[char]] = 1.

preds = model.predict_classes([x], verbose=0)[0]
print("======================================")
print(decode(preds))
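(character_table is not shown above; it is presumably a CharacterTable-style helper like the one in the Keras addition_rnn example, so roughly like the sketch below, though the real class may differ:)

class CharacterTable(object):
    """Map characters of a fixed alphabet to indices and back (sketch only)."""

    def __init__(self, chars):
        self.chars = sorted(set(chars))
        self.char_indices = {c: i for i, c in enumerate(self.chars)}
        self.indices_char = {i: c for i, c in enumerate(self.chars)}

    def decode(self, indices, calc_argmax=True):
        # With calc_argmax=False the input is already a sequence of class indices,
        # which matches what model.predict_classes returns above.
        if calc_argmax:
            indices = indices.argmax(axis=-1)
        return ''.join(self.indices_char[i] for i in indices)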
I don't know what the problem is. There are about 90 checkpoints in my directory and I load the latest one based on accuracy. All of them were saved by ModelCheckpoint:
checkpoint = ModelCheckpoint(MODEL_CHECKPOINT_DIRECTORYNAME + '/' + MODEL_CHECKPOINT_FILENAME,
                             save_best_only=True)
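(For reference: filenames like weights.204-0.20.hdf5 suggest an epoch/val_loss format string. MODEL_CHECKPOINT_FILENAME is not shown, so the following is only a guess at what the full setup probably looks like; note that with save_best_only=True the default quantity monitored is 'val_loss', not accuracy:)

from keras.callbacks import ModelCheckpoint

# Hypothetical reconstruction -- the real MODEL_CHECKPOINT_FILENAME may differ.
checkpoint = ModelCheckpoint('models/weights.{epoch:02d}-{val_loss:.2f}.hdf5',
                             monitor='val_loss', save_best_only=True)
model.fit(X_train, y_train, validation_split=0.1, nb_epoch=500,
          callbacks=[checkpoint])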
I'm stuck. What am I doing wrong?
Answer 0 (score: 1)
In the repo you provided, the training and validation sentences are inverted before being fed into the model (as is commonly done in seq2seq learning).
dataset = DataSet(DATASET_FILENAME)
As you can see, the default value of inverted is True, so the questions get inverted.
class DataSet(object):
    def __init__(self, dataset_filename, test_set_fraction=0.1, inverted=True):
        self.inverted = inverted

        ...

        question = question[::-1] if self.inverted else question
        questions.append(question)
You could try inverting the sentence during prediction as well. Specifically:
x = np.zeros((1, len(question), len(dataset['chars'])))
for t, char in enumerate(question):
    x[0, len(question) - t - 1, character_table.char_indices[char]] = 1.
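An equivalent alternative is to reverse the string up front, mirroring the question[::-1] line in DataSet, and then encode it in the normal left-to-right order:

# Reverse the input first, exactly like DataSet does with question[::-1].
question = question[::-1]
x = np.zeros((1, len(question), len(dataset['chars'])))
for t, char in enumerate(question):
    x[0, t, character_table.char_indices[char]] = 1.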
Answer 1 (score: 0)
When you generate the model in your predict.py file:
model = generate_model(len(question), dataset['chars'])
Is the first argument the same as in your training file, or is the question length dynamic? If it differs, you are generating a different model, so the checkpoint you saved will not work.
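A minimal sketch of that idea (ANSWER_LEN is a hypothetical constant; use whatever fixed output length your training script actually passes to generate_model):

# ANSWER_LEN must match the output_len used during training, otherwise the
# architecture built here differs from the one the saved weights expect.
ANSWER_LEN = 100  # placeholder value -- check the training script

model = generate_model(ANSWER_LEN, dataset['chars'])
model.load_weights('models/weights.204-0.20.hdf5')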
Answer 2 (score: 0)
It could also be that the dimensionality of the array/DataFrame you pass does not match what the called function expects. When the called method expects a single dimension, try ravel to flatten the data down to the one dimension you expect.
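For example (a generic NumPy illustration, not tied to this repo's code):

import numpy as np

preds = np.zeros((1, 10))   # shape (1, 10): a batch dimension is still attached
flat = preds.ravel()        # shape (10,): a flat 1-D view of the same values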