Error when checking target: expected dense_1 to have shape (1,) but got array with shape (12,)

Date: 2019-07-10 07:49:02

Tags: tensorflow machine-learning keras deep-learning lstm

I am trying to create an LSTM model. My data has shape (23931, 7). For the model I selected two columns, title['Train'] and title['Label']. I am following two tutorials: a link, a link.

Please help me understand why this does not work.

When I run it, I get the following error:

ValueError: Error when checking target: expected dense_1 to have shape (1,) but got array with shape (12,)

X_train_pad.shape is (2839, 24); y_train_pad.shape is (2839, 24, 14968).

import pandas as pd
import numpy as np
import gensim,logging
from nltk import word_tokenize
import string
import re

from tensorflow.python.keras.preprocessing.text import Tokenizer
from tensorflow.python.keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, GRU, Dropout
from keras.models import model_from_yaml
from keras.utils import plot_model, to_categorical
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
title = pd.read_excel('../file_name.xlsx')

x_train = title.loc[:2838,'Train']
y_train = title.loc[:2838,'Label']

x_test = title.loc[2839:,'Train']
y_test = title.loc[2839:,'Label']

def clean_text(text):
    # lowercase, drop punctuation characters, and trim whitespace
    text = text.lower()
    text = text.translate(str.maketrans('', '', string.punctuation))
    text = text.strip()
    # remove remaining special characters and turn separators into spaces
    text = re.sub(r'[?!\'"#]', r'', text)
    text = re.sub(r'[.,()\\/]', r' ', text)
    return text

x_train = x_train.apply(clean_text)
x_train = x_train.apply(word_tokenize)

x_test = x_test.apply(clean_text)
x_test = x_test.apply(word_tokenize)

y_train = y_train.apply(clean_text)
y_train = y_train.apply(word_tokenize)

tokenizer = Tokenizer()
max_length = max([len(s.split()) for s in title['Название АСНА']])
tokenizer.fit_on_texts(title['Название АСНА'])
vocab_size = len(tokenizer.word_index) + 1  # +1 for the reserved padding index 0

X_train_tokens = tokenizer.texts_to_sequences(x_train)
X_test_tokens = tokenizer.texts_to_sequences(x_test)

X_train_pad = pad_sequences(X_train_tokens, maxlen=max_length, padding='post')
X_test_pad = pad_sequences(X_test_tokens, maxlen=max_length, padding='post')

y_train_tokens = tokenizer.texts_to_sequences(y_train)
y_train_pad = pad_sequences(y_train_tokens, maxlen=max_length, padding='post')
y_train_label = to_categorical(y_train_pad, num_classes=vocab_size)  # one-hot, shape (samples, max_length, vocab_size)

EMBEDDING_DIM = 100  # not defined in the original post; assumed embedding size

model = Sequential()
model.add(Embedding(vocab_size, EMBEDDING_DIM, input_length=max_length))
model.add(LSTM(256))
model.add(Dropout(0.1))
model.add(Dense(vocab_size, activation='sigmoid'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
print('Train...')
model.fit(X_train_pad, y_train_pad, batch_size=128, epochs=100, verbose=2)

1 Answer:

Answer 0 (score: 0)

If you use the sparse_categorical_crossentropy loss, you need to provide integer labels (not one-hot encoded labels). Since you have already one-hot encoded your labels with to_categorical, you should use the categorical_crossentropy loss instead:

model.compile(loss='categorical_crossentropy', optimizer='adam')
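To illustrate the label-format contract of the two losses, here is a minimal, self-contained sketch with toy data (the shapes, class count, and variable names are made up for the example; this is not the asker's model):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

num_classes = 5
x = np.random.rand(8, 10)                           # 8 samples, 10 features
y_int = np.random.randint(num_classes, size=(8,))   # integer labels, shape (8,)

model = Sequential()
model.add(Dense(num_classes, activation='softmax', input_shape=(10,)))

# Option 1: integer labels -> sparse_categorical_crossentropy
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
model.fit(x, y_int, epochs=1, verbose=0)

# Option 2: one-hot labels, shape (8, num_classes) -> categorical_crossentropy
y_onehot = to_categorical(y_int, num_classes=num_classes)
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x, y_onehot, epochs=1, verbose=0)

Either way, the target has to match what the loss expects: a single integer per sample for the sparse variant, or a vector of length num_classes for the one-hot variant. The error in the question is exactly this mismatch: with sparse_categorical_crossentropy the model expects a target of shape (1,) per sample but received one of shape (12,).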