High validation loss (with the training dataset) while training loss is low, using the TensorFlow Object Detection API

Asked: 2020-08-04 14:16:33

Tags: tensorflow object-detection-api tensorflow-model-garden

While fine-tuning a Faster R-CNN model with the model_main.py script, I deliberately set the evaluation dataset to be the same as the training dataset (TF_DATA), expecting to see the same loss during evaluation as during training. However, the evaluation loss (after 4,000 epochs) is:

Loss/BoxClassifierLoss/classification_loss = 20588.025
Loss/BoxClassifierLoss/localization_loss = 9474.761
Loss/RPNLoss/localization_loss = 0.10792526
Loss/RPNLoss/objectness_loss = 0.4256882
Loss/total_loss = 30063.021
loss = 30063.021

The total training loss is:

I0804 14:01:57.539440 139956088792960 basic_session_run_hooks.py:260] loss = 0.27122372, step = 4200

Constants:

RESIZE_SHAPE = (300, 300)
EVALUATE_EVERY = 10000
EPOCHS = 100000

NMS_SCORE_THRESHOLD = 0.1
IOU_THRESHOLD = 0.7
IOU_THRESHOLD2 = 0.6
NMS_SCORE_THRESHOLD2 = 0.01
LR_INIT = 0.0001
BATCH_SIZE = 1
AUGMENTATIONS = ''''''
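AUGMENTATIONS is empty here, i.e. I train without augmentation. For illustration only, if it were set it would hold a standard OD API snippet to be spliced into train_config below, along these lines:

# Purely illustrative -- the run above uses an empty string (no augmentation).
# random_horizontal_flip is a standard OD API data_augmentation option.
AUGMENTATIONS = '''data_augmentation_options {
  random_horizontal_flip {
  }
}'''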

My config file:

model {
  faster_rcnn {
    num_classes: 1
    image_resizer {
      fixed_shape_resizer {
        height: '''+str(RESIZE_SHAPE[0])+'''
        width: '''+str(RESIZE_SHAPE[1])+'''
      }
    }
    feature_extractor {
      type: 'faster_rcnn_resnet101'
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: '''+str(NMS_SCORE_THRESHOLD)+'''
    first_stage_nms_iou_threshold: '''+str(IOU_THRESHOLD)+'''
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: true
        dropout_keep_probability: 0.5
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: '''+str(NMS_SCORE_THRESHOLD2)+'''
        iou_threshold: '''+str(IOU_THRESHOLD2)+'''
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}

train_config: {
  batch_size: '''+str(BATCH_SIZE)+'''
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: '''+str(LR_INIT)+'''
          schedule {
            step: 900000
            learning_rate: '''+str(LR_INIT)+'''
          }
          schedule {
            step: 1200000
            learning_rate: '''+str(LR_INIT)+'''
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "'''+MODEL_TO_USE+'''/model.ckpt"
  from_detection_checkpoint: true
  load_all_detection_checkpoint_vars: false

  '''+AUGMENTATIONS+'''
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "'''+TF_DATA+'''" 
  }
  label_map_path: "'''+CLASS_LABELS+'''"
  shuffle: true 
}

eval_config: {
  num_examples: '''+str(len(test_dataset))+'''
  max_evals: '''+str(EPOCHS // EVALUATE_EVERY)+'''
  min_score_threshold: '''+str(NMS_SCORE_THRESHOLD2)+'''
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "'''+TF_DATA+'''" 
  }
  label_map_path: "'''+CLASS_LABELS+'''" 
}
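For reference, a minimal sketch of how this template gets written out and used; here CONFIG_TEMPLATE stands for the concatenated string above, and the paths are placeholders:

# Assumption: CONFIG_TEMPLATE is the concatenated template string shown above.
with open('pipeline.config', 'w') as f:
    f.write(CONFIG_TEMPLATE)

# Typical launch with model_main.py's standard flags:
#   python model_main.py \
#       --pipeline_config_path=pipeline.config \
#       --model_dir=training/ \
#       --alsologtostderr
# (model_main.py also exposes an --eval_training_data flag as an
# alternative way to evaluate on the training set.)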

Why does the total loss differ between the training and evaluation steps when they use the same data?

When I use just the legacy/train.py script, I already see reasonable bounding boxes after 1,000 epochs.
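That run amounts to something like the following (standard legacy/train.py flags; paths are placeholders):

python legacy/train.py --logtostderr --pipeline_config_path=pipeline.config --train_dir=training/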

1 Answer:

Answer 0 (score: 0)

Your problem is not reproducible as posted, so it is hard to pin down its root cause.

Nevertheless, you should be aware that some parts of the network behave differently during training than during testing (validation).

The first is dropout, which is only applied during training; however, that alone should not produce worse results at evaluation time.
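A minimal sketch of that asymmetry with tf.keras (TF 2.x assumed; purely illustrative, not tied to your OD API code):

import tensorflow as tf

x = tf.ones((1, 4))
drop = tf.keras.layers.Dropout(rate=0.5)

# Training mode: roughly half the units are zeroed and the survivors are
# rescaled by 1 / (1 - rate) = 2, keeping the expected value unchanged.
print(drop(x, training=True).numpy())   # e.g. [[2. 0. 2. 0.]] (random)

# Inference mode: dropout is the identity function.
print(drop(x, training=False).numpy())  # [[1. 1. 1. 1.]]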

The second, and the most important, is batch normalization: at least in PyTorch, during training it uses the current batch's statistics to compute the output and to update the running estimates, whereas at test time it uses the statistics accumulated during training. So it really does produce different results between training and testing, especially when the batch size is small.
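A similar minimal sketch for batch norm (again tf.keras for consistency with the question; PyTorch's nn.BatchNorm1d behaves analogously):

import tensorflow as tf

x = tf.constant([[10.0], [12.0]])          # tiny batch, far from the
bn = tf.keras.layers.BatchNormalization()  # fresh running stats (0 / 1)

# Training mode: normalize with the current batch's mean/variance
# (11 and 1 here), so the outputs come out centered regardless of scale.
print(bn(x, training=True).numpy())   # ~[[-1.], [1.]]

# Inference mode: normalize with the accumulated running estimates,
# which start at mean=0 / variance=1, so the outputs stay near the inputs.
print(bn(x, training=False).numpy())  # ~[[10.], [12.]]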

Related question.