Is there a way to save memory in this code?

Date: 2019-07-18 04:12:02

Tags: memory-management deep-learning conv-neural-network

I am trying to run 179,544 images through DenseNet201.
I have 64 GB of RAM and I am working in a Jupyter notebook, but this run blows past the memory limit.

I would like to know exactly where in this code the memory is being used.
It would be great if I could save memory here, or reset the memory at some point.

j = 1
# loop over the pre-computed K-fold index splits
#for (train_index, valid_index) in skf.split(
#    df_train['img_file'],
#    df_train['class']):
for train_index, valid_index in zip(train_indexes, valid_indexes):

    print("cleanup memory")
    # slice out this fold's training and validation rows
    traindf = df_train.iloc[train_index, :].reset_index()
    validdf = df_train.iloc[valid_index, :].reset_index()

    print("=========================================")
    print("====== K Fold Validation step => %d/%d =======" % (j,k_folds))
    print("=========================================")

    print("traindf->",traindf.shape,"valid_df->",validdf.shape)

    print(traindf.size)
    print(validdf.size)

    print("train_index",train_index,"test_index",valid_index)
    if 0 <= j <= 8:  # only run the selected folds

        train_generator = train_datagen.flow_from_dataframe(
            dataframe=traindf,
            directory=TRAIN_CROPPED_PATH,
            x_col='img_file',
            y_col='class',
            target_size= (IMAGE_SIZE, IMAGE_SIZE),
            color_mode='rgb',
            class_mode='categorical',
            batch_size=BATCH_SIZE,
            seed=SEED,
            shuffle=True
            )

        valid_generator = valid_datagen.flow_from_dataframe(
            dataframe=validdf,
            directory=TRAIN_CROPPED_PATH,
            x_col='img_file',
            y_col='class',
            color_mode='rgb',
            class_mode='categorical',
            batch_size=BATCH_SIZE,
            seed=SEED,
            shuffle=True
            )

        model_name = model_path + str(j) + '_'+ modelName+"_Aug"+'.hdf5'
        model_names.append(model_name)

        print("TRAIN_CROPPED_PATH:",TRAIN_CROPPED_PATH)
        print("model_name:",model_name)

        model = get_model()

        try:
            # resume from a previous checkpoint for this fold, if one exists
            model.load_weights(model_name)
        except:
            pass

        print("model_path:",model_path)

        patient = 2
        callbacks = [
            EarlyStopping(monitor='val_loss', patience=patient, mode='min', verbose=1),
            ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=patient / 2, min_lr=0.00001, verbose=1, mode='min'),
            ModelCheckpoint(filepath=model_name, monitor='val_loss', verbose=1, save_best_only=True, mode='min'),
        ]

        history = model.fit_generator(
            train_generator,
            steps_per_epoch=len(traindf.index) / BATCH_SIZE,
            epochs=epochs,
            validation_data=valid_generator,
            validation_steps=len(validdf.index) / BATCH_SIZE,
            verbose=1,
            shuffle=False,
            callbacks=callbacks
            )

    j+=1
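
To put rough numbers on where the memory could be going, here is a back-of-the-envelope estimate of what a single decoded batch from the generator should occupy (just a sketch; the IMAGE_SIZE and BATCH_SIZE values below are placeholders for the constants I actually use):

IMAGE_SIZE = 224   # placeholder: DenseNet201's usual input size
BATCH_SIZE = 32    # placeholder

bytes_per_image = IMAGE_SIZE * IMAGE_SIZE * 3 * 4   # float32 RGB tensor
batch_bytes = bytes_per_image * BATCH_SIZE
print("one decoded image: %.2f MB" % (bytes_per_image / 1024**2))
print("one batch:         %.2f MB" % (batch_bytes / 1024**2))

If my sizes are in this range, a single batch is only tens of megabytes, and as far as I understand flow_from_dataframe only keeps a few batches in memory at a time, which makes me suspect something that accumulates across folds rather than the image data itself.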

I split the data into 8 folds, so the loop runs 8 times, but even with 32 GB of RAM it cannot get through a single fold.

I want to know exactly where the memory is being consumed, and at what point I could release it to save memory.
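
One thing I am considering is explicitly dropping each fold's model and generators before starting the next fold, along the lines of the sketch below (assuming a TensorFlow-backed Keras; I have not verified that this is enough):

import gc
from keras import backend as K  # assumption: TensorFlow backend

def release_fold_memory():
    """Drop the current fold's TensorFlow graph and let Python reclaim objects."""
    K.clear_session()   # discard the graph that get_model() built for this fold
    gc.collect()        # collect anything that is no longer referenced

# idea: at the end of each fold, after fit_generator returns, do something like
#     del model, history, train_generator, valid_generator
#     release_fold_memory()

Since get_model() is called again at the top of the next fold, clearing the session between folds should not break the loop, but I am not sure whether it actually frees enough memory.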

0 Answers