Trying to understand LSTMs and different input lengths

Time: 2019-10-04 06:32:36

Tags: python numpy keras lstm

I'm trying to understand how LSTM networks work. I put together a simple network loosely based on https://machinelearningmastery.com/text-generation-lstm-recurrent-neural-networks-python-keras/, except that instead of generating text it detects whether a word is positive or negative. So, using bad.txt:

yuck
bad
no
bleh
never
hate
awful

and good.txt:

yum
good
yes
hooray
great
love

Now, at least on this tiny dataset, I can get a reasonably "accurate" positive/negative detector (overfit, I know...). When I type in "great" it says positive, and "awful" says negative.

However, when I type in "reat" it says 99% negative, and "wful" says 99% positive. My understanding is that each "second letter" unit is effectively looking for the second letter of the words it was trained on, which may be why. Also, could it be because spaces are used to pad out any shorter word, and is there a better way to do that? More generally, is there a more appropriate way to detect things of varying length, such as words or audio clips? I understand that LSTMs can work with sequential/streaming data, e.g. characters coming in from a text stream, but if that kind of data were fed into an algorithm like this one, an "awful" in the stream would be undercut by the spurious shifted "wful"... what is a good way of correcting for this?
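To make the position problem concrete, here is roughly what "great" and the clipped "reat" look like after the character mapping and zero padding used in the code below (a small standalone snippet just for illustration; the encode helper is not part of my actual script):

chars = " abcdefghijklmnopqrstuvwxyz"
char_to_int = dict((c, i) for i, c in enumerate(chars))
seq_length = 7

def encode(word):
    # map letters to integers, then right-pad with 0 (the space character)
    nums = [char_to_int[c] for c in word.lower() if c in chars]
    return nums + [0] * (seq_length - len(nums))

print(encode("great"))  # [7, 18, 5, 1, 20, 0, 0]
print(encode("reat"))   # [18, 5, 1, 20, 0, 0, 0]

Every letter of the clipped word lands one timestep earlier, so whatever each unit learned about a given position no longer lines up.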

# Train a character-level LSTM to classify words as positive or negative
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint

# create mapping of unique chars to integers, and a reverse mapping
chars = " abcdefghijklmnopqrstuvwxyz"
char_to_int = dict((c, i) for i, c in enumerate(chars))
int_to_char = dict((i, c) for i, c in enumerate(chars))
n_vocab = len(chars)

# prepare the dataset of input-to-output pairs encoded as integers
dataX = []
dataY = []
seq_length = 7
# repeat the tiny word lists 1000 times so there is enough data to train on
for i in range(1000):
    with open('bad.txt') as bad:
        for line in bad:
            if i == 0:
                print(line.strip() + ' maps to [1,0]')
            # encode the word as integers and right-pad with 0 (space) to seq_length
            nums = [char_to_int[n] for n in line.lower() if n in chars]
            while len(nums) < seq_length:
                nums.append(0)  # " "
            dataX.append(nums)
            dataY.append([1, 0])
    with open('good.txt') as good:
        for line in good:
            if i == 0:
                print(line.strip() + ' maps to [0,1]')
            nums = [char_to_int[n] for n in line.lower() if n in chars]
            while len(nums) < seq_length:
                nums.append(0)  # " "
            dataX.append(nums)
            dataY.append([0, 1])

n_patterns = len(dataX)
print ("Total Patterns: "+ str(n_patterns))

# shape the input as (samples, timesteps, features) and normalize to [0, 1]
X = numpy.reshape(dataX, (n_patterns, seq_length, 1))
X = X / float(n_vocab)
print(X[0:10])
# the labels are already one-hot: [1,0] = negative, [0,1] = positive
y = numpy.array(dataY)
# define the LSTM model: 20 LSTM units over the character sequence,
# followed by a 2-way softmax for negative/positive
model = Sequential()
model.add(LSTM(20, input_shape=(X.shape[1], X.shape[2])))
#model.add(Dropout(0.01))
model.add(Dense(y.shape[1], activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam')
# checkpoint the best weights (by training loss) during training
filepath = "simpleweights-improvement-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
model.fit(X, y, epochs=20, batch_size=128, callbacks=callbacks_list)

# print a random training pattern as a sanity check
start = numpy.random.randint(0, len(dataX)-1)
pattern = dataX[start]
print("Seed:")
print("\"", ''.join([int_to_char[value] for value in pattern]), "\"")
# interactively classify typed words
while True:
    word = input('word: ')
    numberize = []
    for l in word:
        if l in char_to_int:
            numberize.append(char_to_int[l])
    while len(numberize) < seq_length:
        numberize.append(0)  # pad with 0 (the space character), as in training
    print(numberize)
    # reshape to (1, timesteps, 1) and normalize, to match the training input
    numberize = numpy.reshape(numberize, (1, len(numberize), 1))
    numberize = numberize / float(n_vocab)
    print(numberize)
    result = model.predict(numberize, verbose=0)
    print(result)
    # just one word at a time; could batch several
    if result[0][1] > result[0][0]:
        print('positive')
    else:
        print('negative')
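One thing I've come across for the padding part of the question is masking, so the LSTM skips the padded timesteps instead of treating the trailing spaces as real input. A minimal sketch of what I mean, using a Keras Masking layer in front of the same kind of LSTM (an untested guess on my part, and it only addresses the padding, not the shifted-letter problem):

from keras.models import Sequential
from keras.layers import Masking, LSTM, Dense

seq_length = 7
masked_model = Sequential()
# skip any timestep whose value is exactly 0.0, i.e. the space padding after normalization
masked_model.add(Masking(mask_value=0.0, input_shape=(seq_length, 1)))
masked_model.add(LSTM(20))
masked_model.add(Dense(2, activation='softmax'))
masked_model.compile(loss='categorical_crossentropy', optimizer='adam')

I'm not sure whether this is the standard way to handle variable-length input, or whether a different padding scheme would be more usual.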

0 Answers:

No answers yet