ResourceExhaustedError when allocating tensor with shape [] and type float (Keras)

Date: 2017-11-21 04:17:40

Tags: python deep-learning keras

My input shape is (299, 299, 3).

My graphics card is a GTX 1070 (8 GB).

Other specs: Python 3.6, Keras 2.x, TensorFlow backend (1.4), Windows 7.

It doesn't work even with a batch size of 1.

I feel my card should be able to handle a batch of that size.

Here is my code:

def full_model():
    # model layers
    input_img = Input(shape=(299, 299, 3))

    tower_1 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
    tower_1 = Conv2D(64, (3, 3), padding='same', activation='relu')(tower_1)

    tower_2 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
    tower_2 = Conv2D(64, (5, 5), padding='same', activation='relu')(tower_2)

    concatenated_layer = keras.layers.concatenate([tower_1, tower_2], axis=3)

    bottleneck = MaxPooling2D((2, 2), strides=(2, 2), padding='same')(concatenated_layer)
    flatten = Flatten()(bottleneck)
    dense_1 = Dense(500, activation='relu')(flatten)
    predictions = Dense(12, activation='softmax')(dense_1)

    model = Model(inputs=input_img, outputs=predictions)
    sgd = keras.optimizers.SGD(lr=0.1, momentum=0.0, decay=0.0, nesterov=False)
    model.compile(optimizer=sgd,
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

    return model




hdf5_path =r'C:\Users\Moondra\Desktop\Keras Applications\training.hdf5' 
model = full_model()


def run_model(hdf5_path,
              epochs=10,
              steps_per_epoch=8,
              classes=12,
              batch_size=1, model=model):

    for epoch in range(epochs):
        batches = loading_hdf5_files.load_batches(batch_size=batch_size,
                                                  hdf5_path=hdf5_path,
                                                  classes=classes)
        for step in range(steps_per_epoch):
            x, y = next(batches)
            #plt.imshow(x[0])
            #plt.show()
            x = (x / 255).astype('float32')
            print(x.shape)
            data = model.train_on_batch(x, y)
            print('loss : {:.5},  accuracy :  {:.2%}'.format(*data))

    return model

I can't seem to handle even a batch size of 1.

Here is the last part of the error:

ResourceExhaustedError (see above for traceback): OOM when allocating tensor of shape [] and type float
     [[Node: conv2d_4/random_uniform/sub = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [] values: 0.0866025388>, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]

1 Answer:

Answer 0 (score: 1):

It turns out I had way too many parameters.

After the MaxPooling layer, I had over a billion parameters.

I increased the size of my pooling, and no more problems.
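To see where those parameters come from, here is a quick back-of-the-envelope calculation (a sketch based on the model in the question, not actual Keras output): with `'same'` padding the convolutions keep the 299×299 spatial size, the two 64-filter towers concatenate to 128 channels, and the 2×2 pool with stride 2 only halves each spatial dimension to 150×150. Flattening that and connecting it to the 500-unit dense layer costs roughly 1.44 billion weights in that single layer:

```python
# Rough parameter count for the Flatten -> Dense(500) connection,
# assuming the architecture in the question ('same' padding keeps 299x299).
h = w = (299 + 1) // 2                    # 2x2 max pool, stride 2, 'same' -> 150
channels = 64 + 64                        # two towers concatenated -> 128
flatten_size = h * w * channels           # units coming out of Flatten
dense_params = flatten_size * 500 + 500   # weights + biases of Dense(500)

print(flatten_size)   # 2,880,000 flattened units
print(dense_params)   # ~1.44 billion parameters in this one layer
```

In float32 those weights alone need well over 5 GB, before activations and optimizer state, which is why even batch size 1 exhausts an 8 GB card. A more aggressive pooling (e.g. a 4×4 window with stride 4) cuts the flatten size, and hence the dense weight count, by 4×.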