How do I stop TensorFlow from killing the kernel in Spyder?

Asked: 2018-09-03 19:27:15

Tags: python tensorflow keras conv-neural-network spyder

I am running Spyder with Anaconda on Windows 7 64-bit with 8 GB of RAM. Everything is up to date. I am trying to run the following script from a CNN tutorial, which attempts to classify images of cats and dogs:

from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras import optimizers
from keras import metrics

classifier = Sequential()

classifier.add(Convolution2D(32, 3, 3, input_shape = (64, 64, 3), activation = 'relu'))

classifier.add(MaxPooling2D(pool_size = (2, 2)))

classifier.add(Flatten())

classifier.add(Dense(output_dim = 64, activation = 'relu'))
classifier.add(Dense(output_dim = 1, activation = 'sigmoid'))

my_optimizer = optimizers.Nadam(lr=0.015, beta_1=0.9, beta_2=0.999, epsilon=None, schedule_decay=0.004)
classifier.compile(optimizer = my_optimizer, loss = 'mean_squared_error', metrics = [metrics.binary_accuracy])

from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory(
        'dataset/training_set',
        target_size=(64, 64),
        batch_size=15,
        class_mode='binary')

test_set = test_datagen.flow_from_directory(
        'dataset/test_set',
        target_size=(64, 64),
        batch_size=15,
        class_mode='binary')

classifier.fit_generator(training_set,
        steps_per_epoch=800,
        epochs=25,
        validation_data=test_set,
        validation_steps=200)

Everything seems to work fine, but when execution reaches the final classifier.fit_generator(...) call, the kernel dies and restarts.

I have tried uninstalling and reinstalling keras, but the problem persists. The last thing I see in the console before the kernel dies is:

Found 8000 images belonging to 2 classes.
Found 2000 images belonging to 2 classes.
Epoch 1/25
Kernel died, restarting

Could you please help me understand what the problem is and how to fix it? RAM does not seem to be the issue, since memory usage never even reaches 100% before the kernel dies.
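One detail worth double-checking in the fit_generator call above, independent of whatever is crashing the kernel: steps_per_epoch counts batches, not images. With 800 steps and batch_size=15, each "epoch" tries to draw 12,000 images from an 8,000-image training set. A minimal sketch of the arithmetic, assuming the sample counts reported in the console output above:

```python
import math

# Sample counts taken from the "Found ... images" lines above,
# and the batch_size used in flow_from_directory:
train_samples = 8000
test_samples = 2000
batch_size = 15

# steps_per_epoch should be the number of batches needed to cover
# the dataset once, i.e. ceil(samples / batch_size):
steps_per_epoch = math.ceil(train_samples / batch_size)   # 534, not 800
validation_steps = math.ceil(test_samples / batch_size)   # 134, not 200

print(steps_per_epoch, validation_steps)
```

This alone would not normally kill a kernel (the generator simply loops), but it does inflate each epoch's memory and I/O load, so it is worth correcting before digging into the crash itself.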

0 Answers:

No answers