Input shape error when adding an Embedding layer to an LSTM

Time: 2019-10-22 05:05:49

Tags: python keras lstm word-embedding

I'm trying to add an embedding layer to my character-prediction LSTM.

I tried adding the embedding layer in this format:

num_words_in_vocab = 83
max_sentence_length = 40


# build the model: a single LSTM
model = Sequential()
model.add(Embedding(num_words_in_vocab,128,input_length=max_sentence_length))
model.add(LSTM(256, return_sequences=True))
.
.
.

However, Keras raises this error:

Error when checking input: expected embedding_8_input to have 2 dimensions, but got array with shape (36736, 40, 83)

I'm confused because there is nowhere in the Embedding layer to set a variable for the number of examples in the dataset. And I'm not sure how to reshape the dataset so it works with the Embedding layer.

Here is my full code.

# -*- coding: utf-8 -*-
#imports
import re
import sys
import numpy
import random
import requests
import numpy as np
import keras.backend as K
from keras import Input, Model
from keras.layers import Permute, multiply, Embedding
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils
from keras.models import Sequential
from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint
from sklearn.model_selection import train_test_split

#loading book data
html = requests.get("http://www.gutenberg.org/files/11/11-0.txt")
text = html.text
#removing some garbage
text = re.sub(r'[^\x00-\x7f]',r'', text)

#making the word plot, but not using it to train bc 57 chars is better than X,xxx words.
split_text = text.splitlines()

def cleanText(text):
  cleanWords = []
  for exerpt in text:
    if exerpt == '':
      pass
    else:
      cleanWords.append(exerpt)
  #take the clean words and make a LIST of clean words
  clean_word_list = []
  for exerpt in cleanWords:
    temp_list = exerpt.split()
    for word in temp_list:
      if word not in clean_word_list:
        clean_word_list.append(word)
      else:
        pass
  #init dict for counting top 50 words
  dict_prevelence = {}
  for exerpt in cleanWords:
    temp_list = exerpt.split()
    for word in temp_list:
      #if not in dict, add to dict_prevelence, else, increment val
      if word not in dict_prevelence:
        dict_prevelence[word] = 1
      else:
        dict_prevelence[word] += 1
  return clean_word_list, dict_prevelence

#cleaning up the alice in wonderland and getting unsorted prevelence dict
clean_word_list, dict_prevelence = cleanText(split_text)
#sorting dict
dict_prevelence = sorted(dict_prevelence.items(), key=lambda x: x[1], reverse=True)




processed_text = text

#getting list of unique chars
chars = sorted(list(set(processed_text)))
print('Total Unique Chars:', len(chars))
#making dicts so we can translate between the two
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))

#cutting the text into strings of 100 chars, but incrementing them by 3 chars 
#each time b/c if we incremented by 1, 99% of the string would be the same and 
#it wouldn't train that fast.

#!!! I'm guessing this is kind of a good middle ground between using words and chars and the data,
#with words you get a lot more context from each, but with letters there isn't a huge overhead of empty 
#vectors!!!!!
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(processed_text) - maxlen, step):
    sentences.append(processed_text[i: i + maxlen])
    next_chars.append(processed_text[i + maxlen])

#here we're making the empty data vectors
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
#now we add each 'sentence' that overlaps by 3 as a data, after encoding it.
#so each x data entry is a 100 int number that corresponds to a slightly overlapping sentence I guess
#and each y data entry would be the NEXT char in that sentence if it continued.
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        x[i, t, char_indices[char]] = 1
    y[i, char_indices[next_chars[i]]] = 1

#add a thing here for test train split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.33, shuffle=False)

print('X_train Data Shape:', X_train.shape)
print('y_train Data Shape:', y_train.shape)

num_words_in_vocab = 83
max_sentence_length = 40


# build the model: a single LSTM
model = Sequential()
model.add(Embedding(num_words_in_vocab,128,input_length=max_sentence_length))
model.add(LSTM(256, return_sequences=True))
model.add(Dropout(0.2))
model.add(Dense(num_words_in_vocab, activation='softmax'))

optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
model.summary()

#putting in this dope thing called callbacks so we can save weights in case we die during training like we have been.
from keras.callbacks import ModelCheckpoint

# checkpoint
filepath="weights-improvement-{epoch:02d}-{val_accuracy:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath,  verbose=1, save_best_only=True, mode='max')

#TRAIN, THAT, MODEL!!
model.fit(X_train, y_train, validation_data=(X_test, y_test),epochs=25, batch_size=64,verbose=1)




Any help would be great!

1 Answer:

Answer 0: (score: 1)

Regarding the number of samples: in this case Keras infers it automatically from the shape of the input data, X_train.

As far as using the Embedding layer goes, the idea is to turn a matrix of integers into vectors. In your case it looks like you're effectively already doing that in the step where you fill in 'x'. Instead, you may want to let the Embedding layer compute a vector for each index. To do that, I believe you would change 'x' to have shape (num_of_sentences, num_of_chars_per_sentence), where each data point's value is the char index for that particular character. A sketch of that change follows below.
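For illustration, here is a minimal, untested sketch of that data change, reusing the sentences, maxlen, chars, char_indices, and next_chars variables from the posted code:

#Sketch: encode each sentence as a sequence of char indices instead of
#one-hot vectors, so the Embedding layer receives 2D integer input of
#shape (num_sentences, maxlen). The labels y can stay one-hot.
x = np.zeros((len(sentences), maxlen), dtype=np.int32)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)

for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        x[i, t] = char_indices[char]   #integer index, not a one-hot row
    y[i, char_indices[next_chars[i]]] = 1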

Additionally, you will probably want to set the LSTM's return_sequences to False. I believe you're only looking for the final output of that layer. Putting both suggestions together, the model portion might look roughly like the sketch below.
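This is only a sketch built from the code in the question, not a tested rewrite of the full pipeline:

#Sketch: Embedding consumes the (batch, 40) integer input and produces
#(batch, 40, 128); the LSTM with return_sequences=False reduces that to
#(batch, 256), and the Dense softmax predicts the next character, so the
#output shape matches the one-hot y of shape (batch, num_words_in_vocab).
model = Sequential()
model.add(Embedding(num_words_in_vocab, 128, input_length=max_sentence_length))
model.add(LSTM(256, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(num_words_in_vocab, activation='softmax'))

optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)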

I hope this helps.