"Your input ran out of data" error, but the data is there

时间:2020-03-26 22:29:09

标签: python tensorflow machine-learning keras deep-learning

Good morning. I'm trying to learn CNNs and ran into a problem when running the code below.

from tensorflow.keras.layers import Flatten
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Convolution2D
from tensorflow.keras.layers import MaxPooling2D
import pandas as pd
import numpy as np
import matplotlib.pyplot

%matplotlib inline

model = Sequential()
model.add(Convolution2D(32, (3, 3), input_shape=(64, 64, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Convolution2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(units = 128, activation = 'relu'))
model.add(Dense(units = 1, activation = 'sigmoid'))

model.compile(optimizer = 'rmsprop', loss='mse', metrics=['accuracy'])

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory(
    r'C:\Users\Raj Mulati\Downloads\Dev\Machine Learning A-Z New\Part 8 - Deep Learning\Section 40 - Convolutional Neural Networks (CNN)\dataset\training_set',
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')

test_set = test_datagen.flow_from_directory(
    r'C:\Users\Raj Mulati\Downloads\Dev\Machine Learning A-Z New\Part 8 - Deep Learning\Section 40 - Convolutional Neural Networks (CNN)\dataset\test_set',
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')

model.fit_generator(
    training_set,
    steps_per_epoch=8000,
    epochs=25,
    validation_data=test_set,
    validation_steps=2000
 )

The error I get is:

Found 8000 images belonging to 2 classes.

Found 2000 images belonging to 2 classes.
WARNING:tensorflow:sample_weight modes were coerced from
  ...
    to  
  ['...']
WARNING:tensorflow:sample_weight modes were coerced from
  ...
    to  
  ['...']
Train for 8000 steps, validate for 2000 steps
Epoch 1/25
 250/8000 [..............................] - ETA: 14:37 - loss: 0.2485 - accuracy: 0.5340WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 200000 batches). You may need to use the repeat() function when building your dataset.
<tensorflow.python.keras.callbacks.History at 0x234d9fec3c8>

1 Answer:

Answer 0 (score: 0)

Each step consumes a full batch of images. That is, with a batch_size of 32, your data runs out after 250 steps (250 * 32 = 8000), well short of the 8000 steps you requested. Set your steps_per_epoch and validation_steps like this:

model.fit_generator(
    training_set,
    steps_per_epoch=8000//32,
    epochs=25,
    validation_data=test_set,
    validation_steps=2000//32
 )
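
As a side note, newer versions of tf.keras deprecate fit_generator in favor of model.fit, and the iterators returned by flow_from_directory already report their own length as ceil(num_samples / batch_size), so the step counts don't need to be hard-coded. A minimal sketch, assuming the training_set and test_set iterators defined in the question:

# Sketch only: lets the generator length drive the step counts.
# len(iterator) == ceil(num_samples / batch_size), so each epoch
# walks through the data exactly once.
history = model.fit(
    training_set,
    steps_per_epoch=len(training_set),   # 8000 images / 32 -> 250 steps
    epochs=25,
    validation_data=test_set,
    validation_steps=len(test_set))      # 2000 images / 32 -> 63 steps

Either way, the key point is the same as in the answer above: the number of steps per epoch must not exceed the number of batches the generator can actually produce.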