How to reduce dimensionality using PCA

Time: 2015-01-01 17:31:03

Tags: opencv pca

Input: LBP features are extracted from an image of size 75520, so the input LBP data contains 1 row and 75520 columns.

Required output: apply PCA on the input to reduce its dimensionality.

Currently my code looks like this:

void PCA_DimensionReduction(Mat &src, Mat &dst){

    const int PCA_DIMENSON_VAL = 40;
    Mat tmp = src.reshape(1,1); // 1 row x 75520 cols
    Mat projection_result;
    Mat input_feature_vector;
    // scale the features to [0,1] and convert to 32-bit float
    normalize(tmp, input_feature_vector, 0, 1, NORM_MINMAX, CV_32FC1);
    PCA pca(input_feature_vector, Mat(), CV_PCA_DATA_AS_ROW, PCA_DIMENSON_VAL);
    pca.project(input_feature_vector, projection_result);
    dst = projection_result.reshape(1,1);
}

Basically I use this function to match the similarity between two images, but I am not getting correct results, since PCA is not actually being applied; a comparison sketch is shown below.
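For reference, a minimal sketch of how two projected features might be compared, assuming both images are projected with the same pre-trained PCA (the function above rebuilds a PCA per call, which is the underlying problem); the helper name featureDistance and its inputs are hypothetical:

#include <opencv2/core/core.hpp>
using namespace cv;

// Hypothetical helper: compare two already-projected LBP features.
// featA and featB are assumed to be 1 x PCA_DIMENSON_VAL CV_32F rows
// produced by the SAME, pre-trained PCA.
double featureDistance(const Mat &featA, const Mat &featB)
{
    // L2 distance between the projected rows; smaller = more similar
    return norm(featA, featB, NORM_L2);
}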

Any help would be greatly appreciated...

Regards,

...Haris

1 answer:

Answer 0 (score: 3)

You have to collect feature vectors from a lot of images, build a PCA from them (offline), and then use its mean & eigenvectors for the projection.

// let's say you have collected 10 feature vectors of 30 elements each.
// flatten them to a single row (reshape(1,1)) and push_back into a big Data Mat

Mat D(10,30,CV_32F); // 10 rows (features) of 30 elements each
randu(D,0,10);       // only for the simulation here
cerr << D.size() << endl;
// [30 x 10]


// now make a pca, that will only retain 6 eigenvectors
// so the later projections are shortened to 6 elements:

PCA p(D,Mat(),CV_PCA_DATA_AS_ROW,6);
cerr << p.eigenvectors.size() << endl;
// [30 x 6]

// now, that the training step is done, we can use it to
// shorten feature vectors:
// either keep the PCA around for projecting:

// a random test vector, 
Mat v(1,30,CV_32F);
randu(v,0,30);

// pca projection:
Mat vp = p.project(v);

cerr << vp.size() << endl;
cerr << vp << endl;
// [6 x 1]
// [-4.7032223, 0.67155731, 15.192059, -8.1542597, -4.5874329, -3.7452228]


// or, maybe, save the pca.mean and pca.eigenvectors only, and do your own projection:

Mat vp2 = (v - p.mean) * p.eigenvectors.t();

cerr << vp2.size() << endl;
cerr << vp2 << endl;
//[6 x 1]
//[-4.7032223, 0.67155731, 15.192059, -8.1542597, -4.5874329, -3.7452228]

Hmm, oh, and here's the downside: computing a PCA from 4.4k train images with 75k feature elements each is going to take a good long day ;)
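If you take the route of saving only pca.mean and pca.eigenvectors, here is a minimal sketch of persisting and reusing them with cv::FileStorage; the file name "pca.yml" and the node names are my own choice, not part of the original answer:

#include <opencv2/core/core.hpp>
#include <string>
using namespace cv;

// offline (training) side: store the trained basis
void savePCA(const PCA &p, const std::string &path)
{
    FileStorage fs(path, FileStorage::WRITE);
    fs << "mean" << p.mean;                 // 1 x 30 row
    fs << "eigenvectors" << p.eigenvectors; // 6 x 30 basis
    fs.release();
}

// online side: load mean/eigenvectors and project a new feature vector
Mat loadAndProject(const std::string &path, const Mat &v)
{
    Mat mean, eigenvectors;
    FileStorage fs(path, FileStorage::READ);
    fs["mean"] >> mean;
    fs["eigenvectors"] >> eigenvectors;
    fs.release();
    return (v - mean) * eigenvectors.t();   // same projection as vp2 above
}

// usage: savePCA(p, "pca.yml");  Mat vp3 = loadAndProject("pca.yml", v);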