ValueError: Layer weight shape (43, 100) not compatible with provided weight shape (412457, 400)

Asked: 2020-01-21 12:00:09

Tags: python machine-learning keras neural-network

I prepared a small dataset for this project. It raises the following error:

ValueError: Layer weight shape (43, 100) not compatible with provided weight shape (412457, 400)

I think the problem is with the tokenizer.

X and y for train_test_split:

from sklearn.model_selection import train_test_split

X = []
sentences = list(titles["title"])
for sen in sentences:
    X.append(preprocess_text(sen))

y = titles['Unnamed: 1']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)

Here is the tokenizer:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(X_train)

X_train = tokenizer.texts_to_sequences(X_train)
X_test = tokenizer.texts_to_sequences(X_test)

vocab_size = len(tokenizer.word_index) + 1  # vocab_size is 43

maxlen = 100

X_train = pad_sequences(X_train, padding='post', maxlen=maxlen)
X_test = pad_sequences(X_test, padding='post', maxlen=maxlen)
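For intuition, what texts_to_sequences followed by post-padding produces can be sketched without Keras; the toy vocabulary and sentences below are made up for illustration:

```python
# Hypothetical toy vocabulary standing in for tokenizer.word_index.
word_index = {"the": 1, "cat": 2, "sat": 3, "mat": 4}

def texts_to_sequences(texts):
    # Replace each known word with its integer index and drop unknown
    # words, mirroring Tokenizer.texts_to_sequences.
    return [[word_index[w] for w in t.split() if w in word_index] for t in texts]

def pad_post(seqs, maxlen):
    # Truncate to maxlen and append zeros at the end, mirroring
    # pad_sequences(..., padding='post', maxlen=maxlen).
    return [s[:maxlen] + [0] * (maxlen - len(s)) for s in seqs]

seqs = texts_to_sequences(["the cat sat", "the mat"])
padded = pad_post(seqs, maxlen=5)
# Every row now has the same length (5), with index 0 reserved for padding.
```

Note that index 0 never maps to a word, which is why vocab_size is computed as len(word_index) + 1.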

So my pretrained word2vec model has shape (412457, 400):

from numpy import array
from numpy import asarray
from numpy import zeros

from gensim.models import KeyedVectors
embeddings_dictionary = KeyedVectors.load_word2vec_format('drive/My Drive/trmodel', binary=True)

I used a pretrained word2vec model instead of GloVe (the layer is built with vocab_size = 43 and dimension 100, while the weights come from embeddings_dictionary.vectors):

from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers.recurrent import LSTM

model = Sequential()
embedding_layer = Embedding(vocab_size, 100, weights=[embeddings_dictionary.vectors], input_length=maxlen, trainable=False)
model.add(embedding_layer)
model.add(LSTM(128))

model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])

ValueError: Layer weight shape (43, 100) not compatible with provided weight shape (412457, 400)

1 answer:

Answer 0 (score: 1):

If you want to use pretrained weights, you have to pass the appropriate size arguments to the Embedding layer so that it can assign the pretrained weight matrix to the embedding layer's weight matrix.

So you have to do the following:

embedding_layer = Embedding(412457, 400, weights=[embeddings_dictionary.vectors], input_length=maxlen , trainable=False)
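The error in the question comes down to a simple shape comparison: when weights are assigned, the provided matrix must match the shape the layer declares, (input_dim, output_dim). With the numbers from the question, the failing check can be sketched as:

```python
# Shapes from the question: what the layer declares vs. what was provided.
declared = (43, 100)        # Embedding(vocab_size, 100, ...) expects this weight shape
provided = (412457, 400)    # embeddings_dictionary.vectors.shape

# Keras compares these shapes when assigning the weights;
# a mismatch is what raises the ValueError in the question.
compatible = declared == provided
message = f"expected {declared}, got {provided}"
```

Passing 412457 and 400 as the first two Embedding arguments makes the declared shape match the provided matrix.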

Note that the padding length is the sequence length passed to the layer through input_length; it is independent of the embedding dimension (400), so the padding from the question does not need to change:

maxlen = 100

X_train = pad_sequences(X_train, padding='post', maxlen=maxlen)
X_test = pad_sequences(X_test, padding='post', maxlen=maxlen)
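An alternative often used with pretrained vectors is to keep the Embedding layer sized to the tokenizer's vocabulary and build a (vocab_size, embedding_dim) matrix by looking up each tokenizer word in the pretrained model; this avoids loading all 412457 rows and keeps the tokenizer's indices consistent with the weight rows. A minimal sketch, with a plain dict standing in for the gensim KeyedVectors (the word_index and vectors below are hypothetical):

```python
import numpy as np

# Hypothetical stand-ins: in the real code, word_index comes from
# tokenizer.word_index and the lookup from the loaded KeyedVectors.
word_index = {"movie": 1, "great": 2, "boring": 3}
pretrained = {
    "movie": np.array([0.1, 0.2, 0.3, 0.4]),
    "great": np.array([0.5, 0.6, 0.7, 0.8]),
    # "boring" is missing on purpose: out-of-vocabulary words keep zeros.
}
embedding_dim = 4
vocab_size = len(word_index) + 1  # +1 because index 0 is reserved for padding

# One row per tokenizer index; rows for unknown words stay all-zero.
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, i in word_index.items():
    vec = pretrained.get(word)
    if vec is not None:
        embedding_matrix[i] = vec

# embedding_matrix now has shape (vocab_size, embedding_dim), ready to pass
# as weights=[embedding_matrix] to Embedding(vocab_size, embedding_dim, ...).
```

With a real KeyedVectors object, the lookup would use a containment check (word in the model's vocabulary) before reading the vector, and embedding_dim would be the model's vector size (400 here).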