Some problems when training an LSTM model in Keras

Asked: 2019-03-01 01:25:10

Tags: machine-learning keras lstm

This code gives me an error:

<ipython-input-99-b865826eb12b> in data_generator(input_set, img_pretrained, batch_size)
     61
     62         next = np.zeros(VOCABULARY_SIZE)
---> 63         next[words_to_token[text_list[i + 1]]] = 1  # one hot
     64
     65         next_words.append(next)

IndexError: index 3000 is out of bounds for axis 0 with size 1000

I got the code from an Image Captioning example.
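If I understand the error correctly, the one-hot vector is created with VOCABULARY_SIZE = 1000 slots, but words_to_token maps at least one word to index 3000, so the assignment on line 63 runs past the end of the array. Here is a minimal, self-contained snippet that reproduces the same failure (the names follow the traceback, the concrete values are made up):

import numpy as np

# VOCABULARY_SIZE and words_to_token follow the traceback; the values here are
# invented just to reproduce the failure: the one-hot vector has 1000 slots,
# but the token map hands back index 3000.
VOCABULARY_SIZE = 1000
words_to_token = {'zebra': 3000}           # hypothetical word with an out-of-range id

next_word = np.zeros(VOCABULARY_SIZE)
next_word[words_to_token['zebra']] = 1     # IndexError: index 3000 is out of bounds
                                           # for axis 0 with size 1000

The training code that calls the generator is below: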

from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint

batch_size = 128
training_size = get_no_samples(cap_words_cleaned_more, img_features_training)
predictive.compile(loss='categorical_crossentropy', optimizer=RMSprop(), metrics=['accuracy'])

if not DEMO:
    file_name = 'Models/image_caption/checkpoint/weights-improvement-{epoch:02d}-{loss:2.5f}.h5'
    checkpoint = ModelCheckpoint(file_name, monitor='loss', verbose=1, save_best_only=False, mode='min')
    predictive.fit_generator(data_generator(cap_words_cleaned_more, img_features_training, batch_size=batch_size),
                             steps_per_epoch=training_size // batch_size,  # steps must be an integer
                             epochs=30,  # Keras 2 uses `epochs`, not `nb_epoch`
                             verbose=1,
                             callbacks=[checkpoint])
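For reference, this is the relationship I think has to hold between the token map and the one-hot targets: the one-hot length should come from the token map itself rather than a hard-coded 1000, so every id the map can produce stays in bounds. A small made-up sketch (none of these names or values are from my actual notebook):

import numpy as np

# Hypothetical token map standing in for the real words_to_token.
words_to_token = {'<start>': 0, 'a': 1, 'dog': 2, 'runs': 3, '<end>': 4}
vocabulary_size = len(words_to_token)      # derived from the map, not hard-coded

def one_hot(word):
    # One-hot vector whose length always covers every token id in the map.
    vec = np.zeros(vocabulary_size)
    vec[words_to_token[word]] = 1
    return vec

caption = ['<start>', 'a', 'dog', 'runs', '<end>']
next_words = np.array([one_hot(w) for w in caption[1:]])   # target word at each step
print(next_words.shape)                                    # (4, 5)

The same vocabulary size also has to match the width of the model's softmax output layer, otherwise categorical_crossentropy will complain about shape mismatches.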

0 Answers:

There are no answers yet.