Why am I getting "IndexError: tuple index out of range"?

Asked: 2017-02-02 23:23:06

Tags: python machine-learning tensorflow neural-network deep-learning

When I run this code, I get the error "IndexError: tuple index out of range" on line 53. When I change the shape indices on that line from 1 and 2 to 0 and 1, the error goes away, but then the LSTM model does not work, because it needs a 3-D array and with 0 and 1 I am feeding it a 2-D array. How can I solve this, and why is it happening? Thanks!
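
The error itself is easy to reproduce in isolation: ndarray.shape is a tuple, so asking for shape[2] on a 2-D array indexes past the end of that tuple. A minimal, made-up sketch (not the real data) that produces the same message:

    import numpy

    a = numpy.zeros((4, 10))   # a 2-D array: its shape tuple has only two entries
    print(a.shape)             # (4, 10)
    print(a.shape[1])          # 10 -- fine
    print(a.shape[2])          # IndexError: tuple index out of range

The full script is below.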

    import numpy
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.layers import Dropout
    from keras.layers import LSTM
    from keras.callbacks import ModelCheckpoint
    from keras.utils import np_utils

    #Load in text file and read it into raw_text data.
    filename = "berkuttext.txt"
    raw_text = open(filename).read()
    #Convert all text to lowercase to reduce the vocabulary network
    #needs to learn.
    raw_text = raw_text.lower()

    #Create set of distinct characters from text.
    chars = sorted(list(set(raw_text)))
    #Convert characters into unique integers.
    charsToInt = dict((c, i) for i, c in enumerate(chars))

    #Print each character in text and the integer assigned to it.
    print charsToInt

    #Find the total amount of characters, and total amount of
    #unique characters and print the result.
    nChars = len(raw_text)
    nVocab = len(chars)
    print "Total Characters:", nChars
    print "Total Vocab:", nVocab

    #Preparing dataset of input to output pairs, encoded as integers.
    seqLength = 10
    dataIn = []
    dataOut = []
    for i in range(0, nChars - seqLength, 1):
        seqIn = raw_text[i:i + seqLength]
        seqOut = raw_text[i + seqLength]
        dataIn.append([charsToInt[chars] for chars in seqIn])
        dataOut.append(charsToInt[seqOut])
    nPatterns = len(dataOut)
    print "Total Patterns:  ", nPatterns

    #Reshape (reshape what??) into array of integers.
    In = numpy.reshape(dataIn, (nPatterns, seqLength), 1)
    #Normalize array of integers into floats between 0 and 1.
    In = In / float(nVocab)
    #One hot encode the output variable: an array of 0's with a single 1
    #marking the correct output character.
    Out = np_utils.to_categorical(dataOut)

    #Define the LSTM model
    model = Sequential()

The following line is where the error occurs --->>>>

    model.add(LSTM(256, input_shape=(In.shape[2], In.shape[1]), return_sequences=True))                     
    model.add(Dropout(0.2))
    model.add(LSTM(256))
    model.add(Dropout(0.2))
    model.add(Dense(Out.shape[1], activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam')

    #Define the checkpoint, which records all network weights to file
    #each time an improvement in loss is observed at the end of the epoch.
    filepath = "weights-improvement-{epoch:02d}-{loss:.4d}.hdf5"
    checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
    callbacksList = [checkpoint]

    #Fitting data to the LSTM model.
    model.fit(In, Out, nb_epoch=20, batch_size=128, callbacks=callbacksList)
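
For comparison, here is a minimal sketch of how the reshape and the LSTM input line would look if the intent is a 3-D array of (samples, timesteps, features) with one feature per timestep. This is only an assumption about the intended shapes, reusing the variables from the script above: numpy.reshape takes the whole new shape as a single tuple, so the trailing 1 in the call above is passed as the separate order argument rather than as part of the shape, and In stays 2-D.

    #Hypothetical sketch, assuming one feature per character position: pass the
    #full 3-D shape as one tuple so In has shape (nPatterns, seqLength, 1).
    In = numpy.reshape(dataIn, (nPatterns, seqLength, 1))
    In = In / float(nVocab)

    #For an LSTM, input_shape is (timesteps, features), i.e. (In.shape[1], In.shape[2]).
    model.add(LSTM(256, input_shape=(In.shape[1], In.shape[2]), return_sequences=True))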

0 Answers:

No answers