String classification (modified)

Time: 2018-05-31 09:07:52

Tags: deep-learning lstm random-forest rnn

I am working on a problem where I have 32,514 rows of scrambled character strings like "wewlsfnskfddsl ... eredsda", each 406 characters long. I need to predict which class each row belongs to; the classes are 12 books, labeled 1-12.

After searching online, I tried the following, but I am getting an error. Any help is much appreciated.

# code
from keras.utils import to_categorical

y = ytrain.values                        # integer class labels
#ytrain = y.ravel()
y = to_categorical(y, num_classes=12)    # one-hot encode the labels
print(y)

 [[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 1. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]]

# reshape X to one sample with 32,514 time steps of one feature
X = X.reshape((1, 32514, 1))


# define model
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(75, input_shape=(32514, 1)))
model.add(Dense(12, activation='softmax'))
print(model.summary())
# compile model
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# fit model
model.fit(X, y, epochs=100, verbose=2)

# save the model to file
model.save('model.h5')
# save the mapping
from pickle import dump
dump(mapping, open('mapping.pkl', 'wb'))

#(batch_size, input_dim)
#(batch_size, timesteps, input_dim)

#### I get the following error:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_19 (LSTM)               (None, 75)                23100     
_________________________________________________________________
dense_13 (Dense)             (None, 12)                912       
=================================================================
Total params: 24,012
Trainable params: 24,012
Non-trainable params: 0
_________________________________________________________________
None
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-35-503a6273e5d0> in <module>()
      7 
      8 # fit model
----> 9 model.fit(X, y, epochs=100, verbose=2)
     10 
     11 # save the model to file

/usr/local/lib/python3.6/dist-packages/keras/models.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
   1000                               initial_epoch=initial_epoch,
   1001                               steps_per_epoch=steps_per_epoch,
-> 1002                               validation_steps=validation_steps)
   1003 
   1004     def evaluate(self, x=None, y=None,

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
   1628             sample_weight=sample_weight,
   1629             class_weight=class_weight,
-> 1630             batch_size=batch_size)
   1631         # Prepare validation data.
   1632         do_validation = False

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
   1478                                     output_shapes,
   1479                                     check_batch_axis=False,
-> 1480                                     exception_prefix='target')
   1481         sample_weights = _standardize_sample_weights(sample_weight,
   1482                                                      self._feed_output_names)

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    121                             ': expected ' + names[i] + ' to have shape ' +
    122                             str(shape) + ' but got array with shape ' +
--> 123                             str(data_shape))
    124     return data
    125 

ValueError: Error when checking target: expected dense_13 to have shape (1,) but got array with shape (12,)

2 answers:

Answer 0 (score: 0)

Machine learning and deep learning models for text classification are complex to build. Here is a guide that can help you get started:

https://papers.nips.cc/paper/5782-character-level-convolutional-networks-for-text-classification.pdf

Hope it helps! :-)

Answer 1 (score: 0)

In my opinion, you can use an LSTM to solve this problem. Long Short-Term Memory (LSTM) units (or blocks) are the building blocks of recurrent neural network (RNN) layers.

These LSTMs help capture sequential information and are typically used when we want to learn sequential patterns in the data.

You can approach this problem with a character-level LSTM.

Here, you feed each character of the text into the LSTM cell; at the last time step, you predict the true class label.

You can use the cross-entropy loss function; a rough sketch is given below.
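A minimal Keras sketch of that idea, assuming `texts` is a list of the 406-character strings and `labels` holds the integer classes 1-12 (the embedding size and training settings are placeholder values):

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.utils import to_categorical

# Assumed inputs: `texts` is a list of 406-character strings,
# `labels` is an array of integer classes in the range 1-12.
chars = sorted(set(''.join(texts)))                   # character vocabulary
char_to_idx = {c: i for i, c in enumerate(chars)}

# Encode each string as a sequence of character indices: shape (num_samples, 406),
# i.e. one sample per row with 406 time steps.
X = np.array([[char_to_idx[c] for c in t] for t in texts])

# Shift labels 1-12 down to 0-11, then one-hot encode them.
y = to_categorical(np.asarray(labels) - 1, num_classes=12)

model = Sequential()
model.add(Embedding(input_dim=len(chars), output_dim=32, input_length=406))
model.add(LSTM(75))                                   # read the text character by character
model.add(Dense(12, activation='softmax'))            # one probability per class

# One-hot targets pair with categorical_crossentropy
# (integer targets would pair with sparse_categorical_crossentropy instead).
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=10, batch_size=64, verbose=2)

With this layout each row is a single sample of 406 time steps, rather than one sample of 32,514 time steps as in the reshape from the question.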

https://machinelearningmastery.com/develop-character-based-neural-language-model-keras/

This will give you the complete picture.