OpenCV MLP UPDATE_WEIGHTS performing poorly?

Posted: 2019-03-04 22:26:42

Tags: opencv mlp

I ran into this problem while trying to plot the learning curve of an MLP that predicts 4 output values from 30,000 samples. I want to use UPDATE_WEIGHTS to output the error after each training epoch, so that I can plot it and watch the trend.

When I train the network with the termination criterion COUNT = 1000, it reaches about 5% error. The problem is that when I instead use UPDATE_WEIGHTS to train the network iteratively, one epoch at a time, the error neither converges to the same value nor follows a similar trend.
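To make that concrete, here is a minimal sketch of the pattern I mean, where net and td are placeholders for a configured cv::ml::ANN_MLP and its cv::ml::TrainData:

 // one epoch per train() call: the first call initializes weights, later calls update them
 net->setTermCriteria(cv::TermCriteria(cv::TermCriteria::COUNT, 1, 0));
 net->train(td);                                        // epoch 1, fresh weights
 for(int e = 2; e <= nEpochs; e++)
     net->train(td, cv::ml::ANN_MLP::UPDATE_WEIGHTS);   // one more epoch each call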

Below is the code for a simple example that reproduces the same UPDATE_WEIGHTS problem, so you can see clearly what is going on. The example uses an MLP to learn to add two numbers, and compares training a network iteratively by applying UPDATE_WEIGHTS nEpochs times (network1) against retraining a network from scratch with the termination criterion COUNT = nEpochs (network2).

OpenCV 4.0.1
MacBook Pro, 64-bit
Eclipse C++

 #include <cstdlib>                // rand()
 #include <cmath>                  // std::abs
 #include <iostream>
 #include <opencv2/ml.hpp>

 // create train data
 int nTrainRows = 1000;
 cv::Mat trainMat(nTrainRows, 2, CV_32F);
 cv::Mat labelsMat(nTrainRows, 1, CV_32F);
 for(int i = 0; i < nTrainRows; i++) {
     double rand1 = rand() % 100;
     double rand2 = rand() % 100;
     trainMat.at<float>(i, 0) = rand1;
     trainMat.at<float>(i, 1) = rand2;
     labelsMat.at<float>(i, 0) = rand1 + rand2;
 }

 // create test data
 int nTestRows = 100;
 cv::Mat testMat(nTestRows, 2, CV_32F);
 cv::Mat truthsMat(nTestRows, 1, CV_32F);
 for(int i = 0; i < nTestRows; i++) {
     double rand1 = rand() % 100;
     double rand2 = rand() % 100;
     testMat.at<float>(i, 0) = rand1;
     testMat.at<float>(i, 1) = rand2;
     truthsMat.at<float>(i, 0) = rand1 + rand2;
 }

 // initialize network1 and set network parameters
 cv::Ptr<cv::ml::ANN_MLP> network1 = cv::ml::ANN_MLP::create();
 cv::Mat layersMat(1, 2, CV_32SC1);
 layersMat.col(0) = cv::Scalar(trainMat.cols);
 layersMat.col(1) = cv::Scalar(labelsMat.cols);
 network1->setLayerSizes(layersMat);
 network1->setActivationFunction(cv::ml::ANN_MLP::ActivationFunctions::SIGMOID_SYM);
 network1->setTermCriteria(cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 1, 0));
 cv::Ptr<cv::ml::TrainData> trainData = cv::ml::TrainData::create(trainMat, cv::ml::ROW_SAMPLE, labelsMat);
 network1->train(trainData);

 // loop through each epoch, one at a time, and compare error between the two methods
 for(int nEpochs = 2; nEpochs <= 20; nEpochs++) {
      // train network1 with one more epoch
      network1->train(trainData,cv::ml::ANN_MLP::UPDATE_WEIGHTS);
      cv::Mat predictions;
      network1->predict(testMat, predictions);
      double totalError = 0;
      for(int i = 0; i < nTestRows; i++)
          totalError += std::abs( truthsMat.at<float>(i, 0) - predictions.at<float>(i, 0) );
      double aveError = totalError / (double) nTestRows;
      std::cout << "network1 (UPDATE_WEIGHTS), " << nEpochs << " epochs: " << aveError << std::endl;

      // recreate network2 from scratch
      cv::Ptr<cv::ml::ANN_MLP> network2 = cv::ml::ANN_MLP::create();
      network2->setLayerSizes(layersMat);
      network2->setActivationFunction(cv::ml::ANN_MLP::ActivationFunctions::SIGMOID_SYM);
      network2->setTermCriteria(cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, nEpochs, 0));

      // train network2 from scratch, specifying to train with nEpochs
      network2->train(trainData);
      network2->predict(testMat, predictions);
      totalError = 0;
      for(int i = 0; i < nTestRows; i++)
          totalError += std::abs( truthsMat.at<float>(i, 0) - predictions.at<float>(i, 0) );
      aveError = totalError / (double) nTestRows;
      std::cout << "network2 (COUNT), " << nEpochs << " epochs: " << aveError << std::endl;
 }
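As an aside, the duplicated error loops above could be collapsed with OpenCV primitives; this is just an equivalent sketch of the same mean-absolute-error computation, not part of the problem:

 // equivalent mean absolute error over the test rows
 cv::Mat diff;
 cv::absdiff(truthsMat, predictions, diff);   // per-row |truth - prediction|
 double aveError = cv::mean(diff)[0];         // average of the absolute differences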

I plotted the average error against the number of training epochs used: [plot: average error vs. number of training epochs]

As you can see, network1 (using UPDATE_WEIGHTS) and network2 (using COUNT) behave very differently even though they are trained for the same number of epochs. The error from network2 converges quickly, while network1 levels off at a higher error. I cannot find the reason for this, since the two approaches should be identical, shouldn't they?

-Tim

0 Answers:

No answers yet.