k-NN example problem in C++ (OpenCV)

Asked: 2015-09-22 09:05:34

Tags: java c++ opencv knn code-translation

I found the following example using k-NN in the OpenCV documentation. My task now is to convert this code to Java and change it somewhat, since my data are not images. I am having a hard time understanding what is going on in the example.

First, have a look at the code:

#include "ml.h"
#include "highgui.h"

int main( int argc, char** argv )
{
    const int K = 10;
    int i, j, k, accuracy;
    float response;
    int train_sample_count = 100;
    CvRNG rng_state = cvRNG(-1);
    CvMat* trainData = cvCreateMat( train_sample_count, 2, CV_32FC1 );
    CvMat* trainClasses = cvCreateMat( train_sample_count, 1, CV_32FC1 );
    IplImage* img = cvCreateImage( cvSize( 500, 500 ), 8, 3 );
    float _sample[2];
    CvMat sample = cvMat( 1, 2, CV_32FC1, _sample );
    cvZero( img );

    CvMat trainData1, trainData2, trainClasses1, trainClasses2;

    // form the training samples
    cvGetRows( trainData, &trainData1, 0, train_sample_count/2 );
    cvRandArr( &rng_state, &trainData1, CV_RAND_NORMAL, cvScalar(200,200), cvScalar(50,50) );

    cvGetRows( trainData, &trainData2, train_sample_count/2, train_sample_count );
    cvRandArr( &rng_state, &trainData2, CV_RAND_NORMAL, cvScalar(300,300), cvScalar(50,50) );

    cvGetRows( trainClasses, &trainClasses1, 0, train_sample_count/2 );
    cvSet( &trainClasses1, cvScalar(1) );

    cvGetRows( trainClasses, &trainClasses2, train_sample_count/2, train_sample_count );
    cvSet( &trainClasses2, cvScalar(2) );

    // learn classifier
    CvKNearest knn( trainData, trainClasses, 0, false, K );
    CvMat* nearests = cvCreateMat( 1, K, CV_32FC1);

    for( i = 0; i < img->height; i++ )
    {
        for( j = 0; j < img->width; j++ )
        {
            sample.data.fl[0] = (float)j;
            sample.data.fl[1] = (float)i;

            // estimate the response and get the neighbors' labels
            response = knn.find_nearest(&sample,K,0,0,nearests,0);

            // compute the number of neighbors representing the majority
            for( k = 0, accuracy = 0; k < K; k++ )
            {
                if( nearests->data.fl[k] == response)
                    accuracy++;
            }
            // highlight the pixel depending on the accuracy (or confidence)
            cvSet2D( img, i, j, response == 1 ?
                (accuracy > 5 ? CV_RGB(180,0,0) : CV_RGB(180,120,0)) :
                (accuracy > 5 ? CV_RGB(0,180,0) : CV_RGB(120,120,0)) );
        }
    }

    // display the original training samples
    for( i = 0; i < train_sample_count/2; i++ )
    {
        CvPoint pt;
        pt.x = cvRound(trainData1.data.fl[i*2]);
        pt.y = cvRound(trainData1.data.fl[i*2+1]);
        cvCircle( img, pt, 2, CV_RGB(255,0,0), CV_FILLED );
        pt.x = cvRound(trainData2.data.fl[i*2]);
        pt.y = cvRound(trainData2.data.fl[i*2+1]);
        cvCircle( img, pt, 2, CV_RGB(0,255,0), CV_FILLED );
    }

    cvNamedWindow( "classifier result", 1 );
    cvShowImage( "classifier result", img );
    cvWaitKey(0);

    cvReleaseMat( &trainClasses );
    cvReleaseMat( &trainData );
    return 0;
}

Link to the source

Now for my questions.
1. What is the CvRNG type? I cannot find it in the Java version of OpenCV.
2. CvMat sample = cvMat( 1, 2, CV_32FC1, _sample ); needs a four-argument constructor, which is not available in Java (see my sketch after this list).
3. Why do I need to form the training samples, and how do I do that? Here it is mentioned that "only the CV_ROW_SAMPLE data layout is supported". What does that mean?
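
For what it's worth, my best guess so far is that the Java bindings replace CvRNG with Core.randn()/Core.randu(), and the four-argument cvMat constructor with Mat.put(). Something like the following, though I have not run it, so I may well be wrong:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class SampleMatSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // CvRNG has no Java counterpart; Core.randn() fills a Mat with
        // normally distributed values directly (here: mean 200, std dev 50).
        Mat trainData = new Mat(100, 2, CvType.CV_32FC1);
        Core.randn(trainData, 200.0, 50.0);

        // Instead of cvMat(1, 2, CV_32FC1, _sample), create the Mat first
        // and copy the float data in with put().
        float[] _sample = {42.0f, 17.0f};
        Mat sample = new Mat(1, 2, CvType.CV_32FC1);
        sample.put(0, 0, _sample);
    }
}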

Besides answers, any other working examples are welcome. =)

1 Answer:

Answer 0 (score: 3)

The example first generates random training data. It builds an Nx2 matrix of training samples (N=100 2D points), along with the corresponding class labels (an Nx1 matrix). The samples therefore have a "row layout": each row is one sample.

The generated data is split into two halves: in the first half, the (N/2)x2 samples are drawn from a normal distribution with mean = 200 and standard deviation = 50 (in both the X and Y coordinates) and belong to the first class, class=1. Similarly, the second half is drawn from X~N(300,50) and labeled class=2.
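
Since you are translating to Java: forming that same training set with the Java bindings might look roughly like this (a sketch only, using the org.opencv.core API; rowRange() submatrix views stand in for cvGetRows()):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;

public class TrainingSetSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        int N = 100;
        // N x 2 samples in row layout (one 2D point per row), N x 1 labels
        Mat trainData = new Mat(N, 2, CvType.CV_32FC1);
        Mat trainClasses = new Mat(N, 1, CvType.CV_32FC1);

        // first half: X,Y ~ N(200, 50), labeled class 1
        Core.randn(trainData.rowRange(0, N / 2), 200.0, 50.0);
        trainClasses.rowRange(0, N / 2).setTo(new Scalar(1));

        // second half: X,Y ~ N(300, 50), labeled class 2
        Core.randn(trainData.rowRange(N / 2, N), 300.0, 50.0);
        trainClasses.rowRange(N / 2, N).setTo(new Scalar(2));
    }
}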

So you can picture the data as two clusters of points in 2D space sitting diagonally across from each other.

Next, we create the K-nearest-neighbor classifier (K=10 in the example) and train it on our training set.

The code then loops over a grid of points covering the 500x500 range (i.e., it visits the 2D points [0,0], [0,1], ..., [1,0], [1,1], ..., [499,499]). For each point, it asks the classifier to find the K nearest neighbors (by Euclidean distance) with their corresponding class labels, and to predict the grid point's label (by a majority vote among those nearest neighbors). It also computes a kind of "confidence" measure, by counting how many of the K=10 nearest neighbors have the same class as the predicted one.
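
In Java, training the classifier and running that prediction loop could look like the sketch below. I am assuming the OpenCV 3.x bindings here, where CvKNearest became org.opencv.ml.KNearest, and I have not run this code:

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.ml.KNearest;
import org.opencv.ml.Ml;

public class KnnGridSketch {
    // trainData/trainClasses as built in the previous snippet
    static void classifyGrid(Mat trainData, Mat trainClasses) {
        final int K = 10;

        // Ml.ROW_SAMPLE answers question 3: it declares that each row
        // of trainData is one sample (the "row layout")
        KNearest knn = KNearest.create();
        knn.train(trainData, Ml.ROW_SAMPLE, trainClasses);

        Mat sample = new Mat(1, 2, CvType.CV_32FC1);
        Mat results = new Mat();
        Mat neighborResponses = new Mat();

        for (int y = 0; y < 500; y++) {
            for (int x = 0; x < 500; x++) {
                sample.put(0, 0, new float[]{x, y});

                // predicted label = majority vote of the K nearest neighbors
                float response = knn.findNearest(sample, K, results,
                                                 neighborResponses, new Mat());

                // confidence = number of neighbor labels agreeing with the vote
                int accuracy = 0;
                for (int k = 0; k < K; k++) {
                    if (neighborResponses.get(0, k)[0] == response) accuracy++;
                }
                // ...color pixel (x, y) from response/accuracy as in the C++ code
            }
        }
    }
}

(If you are stuck on the 2.4-era bindings, the class is org.opencv.ml.CvKNearest with a find_nearest() method instead, if I remember correctly; the overall structure is the same.)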

We store the predictions in an image of the same size as the grid (500x500), color-coded to indicate the class (1 or 2), with the color intensity representing the prediction confidence.

Finally, it draws the original data samples on top of the image, with the points colored by their true class labels, and displays the resulting image.

Now, I have not run this exact code, but I imagine it would give you something like the following:

[image: classification result]

That is roughly how I would write it. Here is my code in case you are interested (I am using mexopencv, a MATLAB wrapper toolbox for OpenCV):

% random training set generated from two normal distributions
N = 100;  % number of training samples
trainData = [randn(N/2,2)*50+200; randn(N/2,2)*50+300];
trainClass = int32([ones(N/2,1)*1; ones(N/2,1)*2]);

% kNN classifier
K = 10;
knn = cv.KNearest();
knn.train(trainData, trainClass);

% build grid of 2D points, predict and find K nearest neighbors
sz = [500 500];
[X,Y] = ndgrid(1:sz(1), 1:sz(2));
[pred,IDX] = knn.findNearest([X(:) Y(:)], K);

% compute prediction confidence
conf = sum(bsxfun(@eq, IDX, pred),2) ./ K;

% evaluate classifier on training set
acc = nnz(knn.predict(trainData) == trainClass) * 100 / N;

% plot (color-coded by class, transparency indicates confidence)
clr1 = lines(2);
clr2 = brighten(clr1, -0.6);
imagesc(ind2rgb(reshape(pred,sz), clr1), 'AlphaData',reshape(conf,sz))
hold on
scatter(trainData(:,1), trainData(:,2), [], clr2(trainClass,:), 'filled')
hold off; xlabel X; ylabel Y;
title(sprintf('kNN Classification Accuracy = %.1f%%',acc))