Any simple EigenFaces classification code in R

Asked: 2016-10-23 18:49:37

Tags: r, pca

I am just a novice R coder and was inspired to classify images using PCA and the EigenFaces technique. However, most of the examples seem to be in Python, and I would rather keep developing in R.

I have loaded the Cambridge greyscale face images into a 400-sample x 10304-column matrix, with the 10304 columns holding the unfolded 112x92 greyscale pixel values. I can plot each image using pixmapRGB.
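
(DisplayImage() used in the code below is my own small plotting helper and is not shown here; a minimal sketch of what such a helper might look like, assuming each row holds a 112x92 image unfolded column-wise, is given next. The exact flip/transpose depends on how the pixels were unfolded.)

# Hypothetical stand-in for the DisplayImage() helper used below: rebuilds a
# 112x92 matrix from one unfolded row and draws it with base graphics
DisplayImage <- function(pixels, width = 92, height = 112, main = "") {
    im <- matrix(pixels, nrow = height, ncol = width)
    image(t(im[height:1, ]), col = gray((0:255) / 255), axes = FALSE, main = main)
}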

I run the PCA analysis and believe I have extracted the eigenvalues, but when I reconstruct my first image from 50 EigenFaces it is still a long way off, looking more like a rough EigenFace itself.

So I do not think I am handling my image means correctly, or scaling correctly (I have tried with and without subtracting the colMeans average image, and prcomp with and without center = FALSE).

So I am really looking for end-to-end EigenFaces classification code in R. This is what I have so far:
cmeans = colMeans(TrainImages)
DisplayImage(cmeans, main = "Average Person")
ProcTrainData = TrainImages    # mean subtraction ("- cmeans") currently disabled

# Now PCA Analysis - Adjusted Tolerance to 0.125 to return ~50 PCs
PCAProcess = prcomp(ProcTrainData, center = TRUE, tol = 0.125)
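# (tol in prcomp() omits components whose standard deviation is less than tol times that of the first component)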

# Analyse PCA results
par(mfrow = c(1, 2))
screeplot(PCAProcess)
devs = PCAProcess$sdev ^ 2 / sum(PCAProcess$sdev ^ 2)    # proportion of variance per PC
plot(cumsum(devs), main = "Cumulative Variance Explained", type = "l")

EigenFaces = PCAProcess$rotation

# Project training data into the PCA eigenvalue space
TrainPCAValues = ProcTrainData %*% EigenFaces
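# Note: prcomp() also returns scores directly as PCAProcess$x, computed from the centred data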

# Plot first ten EigenFaces
par(mfrow = c(2, 5))
par(oma = rep(2, 4), mar = c(0, 0, 3, 0))

for (i in 1:10) {
    DisplayImage(EigenFaces[, i], main = paste0("EF ", i))   #PCs from sample data
}
# ======== Recover the first image from the PCA attributes and eigen images ========
Composite = rep(0, ncol(TrainImages))    # alternatively initialise from PCAProcess$center
for (iv in 1:50) {
    Composite = Composite + TrainPCAValues[1, iv] * EigenFaces[, iv]
}

DisplayImage(Composite, main = "Composite from 50 EigenFaces")
DisplayImage(TrainImages[1, ], main = "Original Sample 1")
DisplayImage(PCAProcess$center, main = "Average Person")

[Image: Eigen Faces]

[Image: Generated Composite vs Original 1st Sample]

1 Answer:

Answer 0 (score: 1):

A bit of progress. Basically, I have decided to skip computing the mean before the prcomp call and instead let prcomp handle the scaling and centring:

# Adjusted tolerance to 0.05 to return ~50 PCs
PCAProcess = prcomp(TrainImages, center = TRUE, scale. = TRUE, tol = 0.05)
#
# Analyse PCA results
summary(PCAProcess)
par(mfrow = c(1, 2))
screeplot(PCAProcess)
devs = PCAProcess$sdev^2 / sum(PCAProcess$sdev^2)    # proportion of variance per PC
plot(cumsum(devs), main = 'Cumulative Variance Explained', type = 'l')
#
# The PCA process reduces the original image dimension (96x96 = 9216) down to ~50
# The rotated data in the ~50-dimensional space is in PCAProcess$x
# The eigen rotations of the original dimensions are captured in PCAProcess$rotation
#
# Looks like we can get away with using 25 PCs to get about 95% of the variance
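# Sanity check on that claim (uses 'devs' computed above): cumulative variance in the first 25 PCs
sum(devs[1:25])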
EigenFaces = PCAProcess$rotation[, 1:25]
# Plot first ten EigenFaces
par(mfrow = c(2, 5))
par(oma = rep(2, 4), mar = c(0, 0, 3, 0))
for (i in 1:10) {
    im <- matrix(data = rev(EigenFaces[, i]), nrow = 96, ncol = 96)
    image(1:96, 1:96, im, col = gray((0:255)/255))
}
#
# Training reconstruction matrix: just the first 25 attributes in PCA space
ReconstructTraining = PCAProcess$x[, 1:25] %*% t(EigenFaces)
#
# Need to unscale and uncentre back using the prcomp computed scale and centre
#
if (!identical(PCAProcess$scale, FALSE)) {
    # multiply each column by its original standard deviation
    ReconstructTraining <- scale(ReconstructTraining, center = FALSE, scale = 1 / PCAProcess$scale)
}
if (!identical(PCAProcess$center, FALSE)) {
    # scale() with a negative 'center' adds the column means back
    ReconstructTraining <- scale(ReconstructTraining, center = -1 * PCAProcess$center, scale = FALSE)
}
# ============================
# Recover a single image (sample 2) from the PCA attributes and eigen images
#
par(mfrow=c(1, 2))
# Original Image 2
im <- matrix(data = rev(im.train[2, ]), nrow = 96, ncol = 96)
image(1:96, 1:96, im, col = gray((0:255)/255))

RestoredImage <- matrix(data = rev(ReconstructTraining[2, ]), nrow = 96, ncol = 96)
image(1:96, 1:96, RestoredImage, col = gray((0:255)/255))
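
For what it is worth, the un-scaling and un-centring above can also be written with sweep(); a minimal sketch, assuming prcomp() was run with center = TRUE and scale. = TRUE as above (so $center and $scale are numeric vectors), and with ReconstructTraining2 just a name for the comparison copy:

# Rebuild the approximation from the first 25 PCs, then undo prcomp()'s
# scaling and centring column by column with sweep()
ReconstructTraining2 <- PCAProcess$x[, 1:25] %*% t(PCAProcess$rotation[, 1:25])
ReconstructTraining2 <- sweep(ReconstructTraining2, 2, PCAProcess$scale, `*`)     # undo scale. = TRUE
ReconstructTraining2 <- sweep(ReconstructTraining2, 2, PCAProcess$center, `+`)    # add the pixel means back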

Compared with the various EigenFaces tutorials and papers this is still not particularly good. So, using 25 EigenFaces:

[Image: Original vs reconstructed]

The Python sklearn EigenFaces look a lot better than what I am getting in R, so I will probably continue my machine learning in Python, since it seems to have the better-supported community.