Here is a simple test script for PCA:
import numpy as np
from sklearn.decomposition import PCA

data = np.array([[1, 2, 3, 4, 5],
                 [2, 3, 4, 5, 6]])
pca = PCA()
newData = pca.fit_transform(data)
# Eigen-decomposition of the 5x5 feature covariance matrix
# (data.transpose() is 5x2, so np.cov treats each feature as a variable)
eigenvalue, eigenvector = np.linalg.eig(np.cov(data.transpose()))
print(eigenvalue)
print(eigenvector)
The output is:
[ 2.50000000e+00 -1.23259516e-32 0.00000000e+00 0.00000000e+00
0.00000000e+00]
[[ -4.47213595e-01 1.18714153e-16 2.46911666e-16 6.66612515e-33
6.66612515e-33]
[ -4.47213595e-01 8.66025404e-01 8.66025404e-01 6.66133815e-17
2.49800181e-17]
[ -4.47213595e-01 -2.88675135e-01 -2.88675135e-01 -5.77350269e-01
-5.77350269e-01]
[ -4.47213595e-01 -2.88675135e-01 -2.88675135e-01 7.88675135e-01
-2.11324865e-01]
[ -4.47213595e-01 -2.88675135e-01 -2.88675135e-01 -2.11324865e-01
7.88675135e-01]]
Can you explain what it means to run PCA or KernelPCA on data like this? And if we really have to work with this kind of problem, how can we get better results?
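For context, here is a minimal numpy-only sketch (reusing the two samples above) of why only the first printed eigenvalue is nonzero: with just 2 observations of 5 features, the 5x5 covariance matrix has rank at most 1, so PCA can extract at most one meaningful component. This is an illustration of the output, not an answer to the question itself.

```python
import numpy as np

# The same two samples as in the question above
data = np.array([[1, 2, 3, 4, 5],
                 [2, 3, 4, 5, 6]], dtype=float)

# data.T is 5x2: rows are features, columns are observations,
# so np.cov returns the 5x5 feature covariance matrix
cov = np.cov(data.T)

# eigvalsh returns eigenvalues in ascending order; reverse to descending
eigenvalues = np.linalg.eigvalsh(cov)[::-1]
print(np.round(eigenvalues, 6))        # [2.5 0. 0. 0. 0.]
print(np.linalg.matrix_rank(cov))      # 1 -- at most n_samples - 1
```

Because the rank is 1, the remaining eigenvalues are zero up to floating-point noise, which is why the original output shows values like `-1.23259516e-32`.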