I am trying to adapt code written to recognise digits from MNIST to a text generation task. I am getting this ValueError:
ValueError: Error when checking target: expected dense_2 to have 2 dimensions, but got array with shape (30, 1, 166)
How can I make the final layer fit this output shape?
I have split some text data into sentences. x_train and x_test are messy sentences produced by OCR software, and y_train and y_test are the same sentences after manual proofreading and correction. I want to train the model to spot the common errors and correct them.
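To make the data format concrete, here is a hypothetical pair (an illustration only; the real sentences come from the pickle files loaded below):
# Hypothetical example of one OCR/corrected sentence pair (not the actual data)
x_example = "Tbe qu1ck hrown fox"   # messy OCR output
y_example = "The quick brown fox"   # the same sentence, manually corrected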
I have searched here and on other sites for a solution to this problem. The fix that seems to work for most people is to use loss='sparse_categorical_crossentropy', but I am already using that.
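For reference, a quick sketch (not part of my code) of the target formats the two losses expect when the final layer is Dense(10, activation='softmax'):
# Sketch only: target formats for the two losses, given predictions of shape (batch, 10)
import numpy as np
y_sparse = np.array([3, 7, 1])      # sparse_categorical_crossentropy: integer class ids, shape (3,)
y_onehot = np.eye(10)[y_sparse]     # categorical_crossentropy: one-hot rows, shape (3, 10)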
Here is the code I am using:
# Imports (not shown in the original post; standalone Keras assumed)
import pickle
import numpy as np
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense
from keras.optimizers import Adam

# Import test and train sets
test_in = open("test.pkl", "rb")
test_set = pickle.load(test_in)
train_in = open("train.pkl", "rb")
train_set = pickle.load(train_in)
x_test, y_test = test_set[0], test_set[1]
x_train, y_train = train_set[0], train_set[1]
# Map all characters in both sets
chars = sorted(list(set("".join(x_train + y_train + x_test + y_test))))
chardict = dict((c, i + 1) for i, c in enumerate(chars))
rchardict = dict((i + 1, c) for i, c in enumerate(chars))
# Encode lists using mapping
temp_list = list()
for gloss in x_test:
    encoded_gloss = [chardict[char] for char in gloss]
    temp_list.append(encoded_gloss)
x_test = temp_list
temp_list = list()
for gloss in y_test:
    encoded_gloss = [chardict[char] for char in gloss]
    temp_list.append(encoded_gloss)
y_test = temp_list
temp_list = list()
for gloss in x_train:
    encoded_gloss = [chardict[char] for char in gloss]
    temp_list.append(encoded_gloss)
x_train = temp_list
temp_list = list()
for gloss in y_train:
    encoded_gloss = [chardict[char] for char in gloss]
    temp_list.append(encoded_gloss)
y_train = temp_list
# Pad all sentences
max_len = max([len(x) for x in x_train + y_train + x_test + y_test])
x_test = np.array(pad_sequences(x_test, maxlen=max_len, padding='post'))
x_test = np.reshape(x_test, (x_test.shape[0], 1, x_test.shape[1]))
y_test = np.array(pad_sequences(y_test, maxlen=max_len, padding='post'))
y_test = np.reshape(y_test, (y_test.shape[0], 1, y_test.shape[1]))
x_train = np.array(pad_sequences(x_train, maxlen=max_len, padding='post'))
x_train = np.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1]))
y_train = np.array(pad_sequences(y_train, maxlen=max_len, padding='post'))
y_train = np.reshape(y_train, (y_train.shape[0], 1, y_train.shape[1]))
# Normalise to improve training speed
x_test = x_test/37.0
x_train = x_train/37.0
# Define the model
model = Sequential()
model.add(LSTM(128, input_shape=(x_test.shape[1:]), activation='relu', return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
opt = Adam(lr=1e-3, decay=1e-5)
# Compile and fit the model
model.compile(loss='sparse_categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(x_test, y_test, epochs=5, validation_data=(x_train, y_train))
I want to train this model so that I can try it on unseen sentences and check whether it overfits, but I cannot get past this error.
Edit: full traceback included below:
Traceback (most recent call last):
File "/Users/adrian/PycharmProjects/WurzburgGlossParser/Rough Work.py", line 80, in <module>
model.fit(x_test[:30], y_test[:30], epochs=5, validation_data=(x_test[30:40], y_test[30:40]))
File"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/training.py", line 952, in fit
batch_size=batch_size)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/training.py", line 789, in _standardize_user_data
exception_prefix='target')
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/training_utils.py", line 128, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking target: expected dense_2 to have 2 dimensions, but got array with shape (30, 1, 166)
Answer 0 (score: 2)
You need to remove the dimension of size 1 from your labels:
y_test = np.squeeze(y_test, axis=1)
y_train = np.squeeze(y_train, axis=1)
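A minimal sketch of applying this fix within the question's pipeline (variable names and shapes follow the question; 166 is the padded sentence length):
import numpy as np

# Drop the size-1 axis added by np.reshape so the labels are 2-D again
y_train = np.squeeze(y_train, axis=1)   # (n_samples, 1, 166) -> (n_samples, 166)
y_test = np.squeeze(y_test, axis=1)
print(y_train.shape, y_test.shape)      # both targets are now 2-D, as dense_2 expects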