How to build an embedding layer in Keras

Asked: 2019-12-18 19:05:54

Tags: python tensorflow machine-learning keras

I'm following one of the examples in Francois Chollet's book, building a text classification model in TensorFlow. I'm trying to create an embedding layer first, but it keeps breaking at that stage.

My logic is as follows:

  • Start with a list of text strings as X and a list of integers as y.

  • Tokenize, vectorize, and pad the text data to the longest sequence length.

  • Convert each integer label into a one-hot encoded array.

  • Feed the input into an embedding layer with:
    • input_dim = total number of unique tokens/words (1499 in my case)
    • output_dim = size of the embedding vectors (starting with 32)
    • input_length = length of the longest sequence, which the padded sequences all share (295 in my case)
  • Pass the resulting embeddings into a Dense layer with 32 hidden units and relu
  • Pass those into a Dense layer with 3 hidden units and softmax to predict the 3 classes

Can someone explain where I'm going wrong here? I thought I understood how to instantiate an embedding layer, but is my understanding incorrect?

Here is my code:

# read in raw data
df = pd.read_csv('text_dataset.csv')
samples = df.data.tolist() # list of strings of text
labels = df.sentiment.to_list() # list of integers

# tokenize and vectorize text data to prepare for embedding
tokenizer = Tokenizer()
tokenizer.fit_on_texts(samples)
sequences = tokenizer.texts_to_sequences(samples)
word_index = tokenizer.word_index
print(f'Found {len(word_index)} unique tokens.')

# setting variables
vocab_size = len(word_index) # 1499
# Input_dim: This is the size of the vocabulary in the text data.
input_dim = vocab_size # 1499
# This is the size of the vector space in which words will be embedded.
output_dim = 32 # recommended by tf
# This is the length of input sequences
max_sequence_length = len(max(sequences, key=len)) # 295
# train/test index splice variable
training_samples = round(len(samples)*.8)

# data = pad_sequences(sequences, maxlen=max_sequence_length) # shape (499, 295)
# pad_sequences defaults to the length of the longest sequence when maxlen is omitted
data = pad_sequences(sequences)

# preprocess labels into one hot encoded array of 3 classes ([1., 0., 0.])
labels = to_categorical(labels, num_classes=3, dtype='float32') # shape (499, 3)

# Create test/train data (80% train, 20% test)
x_train = data[:training_samples]
y_train = labels[:training_samples]
x_test = data[training_samples:]
y_test = labels[training_samples:]

model = Sequential()
model.add(Embedding(input_dim, output_dim, input_length=max_sequence_length))
model.add(Dense(32, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.summary()

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train,
          y_train,
          epochs=10,
          batch_size=32,
          validation_data=(x_test, y_test))

When I run this, I get this error:

Found 1499 unique tokens.
Model: "sequential_23"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_21 (Embedding)     (None, 295, 32)           47968     
_________________________________________________________________
dense_6 (Dense)              (None, 295, 32)           1056      
_________________________________________________________________
dense_7 (Dense)              (None, 295, 3)            99        
=================================================================
Total params: 49,123
Trainable params: 49,123
Non-trainable params: 0
_________________________________________________________________
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-144-f29ef892e38d> in <module>()
     51           epochs=10,
     52           batch_size=32,
---> 53           validation_data=(x_test, y_test))

2 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    129                         ': expected ' + names[i] + ' to have ' +
    130                         str(len(shape)) + ' dimensions, but got array '
--> 131                         'with shape ' + str(data_shape))
    132                 if not check_batch_axis:
    133                     data_shape = data_shape[1:]

ValueError: Error when checking target: expected dense_7 to have 3 dimensions, but got array with shape (399, 3)

To troubleshoot, I've been commenting out individual layers to try to see what's going on. I found that the problem persists all the way down to the first layer, which makes me think I don't understand the embedding layer well. See below:

model = Sequential()
model.add(Embedding(input_dim, output_dim, input_length=max_sequence_length))
# model.add(Dense(32, activation='relu'))
# model.add(Dense(3, activation='softmax'))
model.summary()

This results in:

Found 1499 unique tokens.
Model: "sequential_24"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_22 (Embedding)     (None, 295, 32)           47968     
=================================================================
Total params: 47,968
Trainable params: 47,968
Non-trainable params: 0
_________________________________________________________________
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-150-63d1b96db467> in <module>()
     51           epochs=10,
     52           batch_size=32,
---> 53           validation_data=(x_test, y_test))

2 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    129                         ': expected ' + names[i] + ' to have ' +
    130                         str(len(shape)) + ' dimensions, but got array '
--> 131                         'with shape ' + str(data_shape))
    132                 if not check_batch_axis:
    133                     data_shape = data_shape[1:]

ValueError: Error when checking target: expected embedding_22 to have 3 dimensions, but got array with shape (399, 3)

1 Answer:

Answer 0: (score: 1)

Dense layers in Keras expect flat input with only two dimensions, [BATCH_SIZE, N]. The output of an embedding layer applied to a sentence has 3 dimensions: [BS, SEN_LENGTH, EMBEDDING_SIZE].
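The mismatch can be illustrated with plain NumPy (a sketch, not the answer's code): a Dense layer is essentially a matrix multiply over the last axis, so it leaves the sequence axis intact, and the model's output stays 3-D while the one-hot targets are 2-D:

```python
import numpy as np

batch, seq_len, emb = 399, 295, 32
x = np.zeros((batch, seq_len, emb))  # embedding output: 3-D

W = np.zeros((emb, 3))               # weights of a Dense(3) layer
b = np.zeros(3)
y = x @ W + b                        # matmul acts on the last axis only

print(y.shape)  # (399, 295, 3) -- still 3-D, but the targets have shape (399, 3)
```

This is exactly the "expected dense_7 to have 3 dimensions" error: Keras sees a 3-D output and a 2-D target array.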

There are two ways to fix this:

  1. Flatten the output of the embedding layer: add model.add(Flatten()) before the first Dense layer;
  2. Use convolutional layers (the recommended approach): model.add(Conv1D(filters=32, kernel_size=8, activation='relu')) (note the convolution output is still 3-D, so it still needs a pooling or Flatten layer before the Dense layers)
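As a sketch (not part of the original answer), option 1 might look like this; note the `+ 1` on `input_dim`, since Keras `Tokenizer` word indices start at 1, so index 1499 must be valid:

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense

vocab_size = 1499         # unique tokens reported by the Tokenizer
max_sequence_length = 295

model = Sequential([
    Input(shape=(max_sequence_length,)),
    # +1 because Tokenizer indices start at 1, not 0
    Embedding(input_dim=vocab_size + 1, output_dim=32),
    Flatten(),                       # (None, 295, 32) -> (None, 295 * 32)
    Dense(32, activation='relu'),
    Dense(3, activation='softmax'),  # (None, 3), matching the one-hot labels
])
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

With the Flatten layer in place, the final output shape is (None, 3), which matches the (399, 3) target array and resolves the ValueError.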