PCA prediction and error with sklearn

Time: 2018-07-11 21:27:00

Tags: pandas scikit-learn data-science pca prediction

I want to predict some values with PCA in Python using sklearn.
First, I take the relevant columns from the data, calling the features X and the feature to be predicted Y.

Y = DF['Predict'].values
X = pd.DataFrame(data=scale(DF[X_cols]), columns=X_cols)

pca = PCA(n_components=NCOMPS)  # NCOMPS = min(len(X_cols), num_samples)

X_reduced = pd.DataFrame(pca.fit_transform(X),
                         columns=['PC%i' % i for i in range(NCOMPS)])

I have already plotted the explained variance against the number of PCs, so I know which PCs to extract. Now I want to plot the error in predicting Y against the number of PCs used.
How do I make use of its predictive power?

On top of that, I would also like to add LOOCV, but if I run into problems again I think I will save that for another question.

Latest edit: I tried the following, but a dozen undo/redo operations later, Spyder's edit history could no longer rescue me from this misery.

classifier = LogisticRegression()
total_err = []
for num_comps in range(1, NCOMPS):
    classifier.fit(X_reduced, Y)

    ypred = np.array(classifier.predict(X_reduced.iloc[:, :num_comps]))
    Y = np.array(Y)
    total_err.append(abs(np.subtract(Y, ypred)))

Where did it go wrong? The console says 'X has 2 features per sample; expecting 30'.

1 answer:

Answer 0 (score: 0)

You just need to pick a classifier/estimator and fit it to your data.
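As for the error in the question's loop: the classifier is fit on all NCOMPS components but then asked to predict on only the first num_comps of them, hence "expecting 30". Fitting and predicting on the same slice fixes it. A minimal sketch, with iris standing in for the question's DF (the max_iter value is an assumption, added to ensure convergence):

```python
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, Y = load_iris(return_X_y=True)
NCOMPS = X.shape[1]
X_reduced = pd.DataFrame(PCA(n_components=NCOMPS).fit_transform(X),
                         columns=['PC%i' % i for i in range(NCOMPS)])

classifier = LogisticRegression(max_iter=1000)
total_err = []
for num_comps in range(1, NCOMPS + 1):
    X_sub = X_reduced.iloc[:, :num_comps]  # same slice for fit and predict
    classifier.fit(X_sub, Y)
    ypred = classifier.predict(X_sub)
    # sum of absolute label differences, the question's error measure
    total_err.append(np.abs(Y - ypred).sum())
```

Note this measures training error; held-out error needs a train/test split or cross-validation, as below.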

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

iris = load_iris()
X = iris.data
y = iris.target

pca = PCA()

X_reduced = pca.fit_transform(X)

rf = RandomForestClassifier()
X_train, X_test, y_train, y_test = train_test_split(X_reduced, y)
rf.fit(X_train, y_train)
rf.predict_proba(X_test)
array([[0. , 0.9, 0.1],
       [0. , 0.8, 0.2],
       [0.9, 0. , 0.1],
       [0. , 0.2, 0.8],
       [0. , 1. , 0. ],
       [1. , 0. , 0. ],
       [0. , 0.1, 0.9],
       [0. , 0.3, 0.7],
       [1. , 0. , 0. ],
       [0. , 0. , 1. ],
       [1. , 0. , 0. ],
       [0.9, 0.1, 0. ],
       [1. , 0. , 0. ],
       [0. , 1. , 0. ],
       [1. , 0. , 0. ],
       [0. , 0.9, 0.1],
       [0.9, 0. , 0.1],
       [1. , 0. , 0. ],
       [0. , 0.8, 0.2],
       [1. , 0. , 0. ],
       [0. , 0. , 1. ],
       [0. , 0. , 1. ],
       [0.1, 0.8, 0.1],
       [0. , 0.7, 0.3],
       [1. , 0. , 0. ],
       [0.9, 0.1, 0. ],
       [0. , 0.7, 0.3],
       [0. , 0.1, 0.9],
       [0. , 0.9, 0.1],
       [0. , 0.9, 0.1],
       [0. , 0. , 1. ],
       [0. , 0. , 1. ],
       [0. , 0.7, 0.3],
       [0. , 0. , 1. ],
       [0. , 0. , 1. ],
       [1. , 0. , 0. ],
       [1. , 0. , 0. ],
       [1. , 0. , 0. ]])
rf.score(X_test, y_test)
0.9736842105263158
pca.inverse_transform(X_test)
array([[5.7, 2.8, 4.5, 1.3],
       [5.7, 3. , 4.2, 1.2],
       [5.4, 3.9, 1.3, 0.4],
       [7.1, 3. , 5.9, 2.1],
       [6.4, 2.9, 4.3, 1.3],
       [5.7, 3.8, 1.7, 0.3],
       [6.4, 3.1, 5.5, 1.8],
       [7.7, 3. , 6.1, 2.3],
       [4.8, 3. , 1.4, 0.3],
       [6.9, 3.2, 5.7, 2.3],
       [5.2, 4.1, 1.5, 0.1],
       [4.6, 3.1, 1.5, 0.2],
       [4.9, 3.1, 1.5, 0.1],
       [6. , 2.2, 4. , 1. ],
       [5. , 3.4, 1.5, 0.2],
       [5.5, 2.4, 3.7, 1. ],
       [5. , 3.5, 1.3, 0.3],
       [5.5, 3.5, 1.3, 0.2],
       [6. , 2.2, 5. , 1.5],
       [4.8, 3. , 1.4, 0.1],
       [6.9, 3.1, 5.4, 2.1],
       [6.8, 3.2, 5.9, 2.3],
       [5.6, 3. , 4.5, 1.5],
       [5.6, 2.9, 3.6, 1.3],
       [5.1, 3.8, 1.6, 0.2],
       [4.3, 3. , 1.1, 0.1],
       [6.6, 2.9, 4.6, 1.3],
       [7.4, 2.8, 6.1, 1.9],
       [5.6, 3. , 4.1, 1.3],
       [5.8, 2.7, 4.1, 1. ],
       [6.5, 3. , 5.2, 2. ],
       [6.3, 2.9, 5.6, 1.8],
       [6.9, 3.1, 4.9, 1.5],
       [7.2, 3.2, 6. , 1.8],
       [7.2, 3.6, 6.1, 2.5],
       [5.4, 3.9, 1.7, 0.4],
       [5.1, 3.5, 1.4, 0.2],
       [5.8, 4. , 1.2, 0.2]])
y_test
array([1, 1, 0, 2, 1, 0, 2, 2, 0, 2, 0, 0, 0, 1, 0, 1, 0, 0, 2, 0, 2, 2,
       1, 1, 0, 0, 1, 2, 1, 1, 2, 2, 1, 2, 2, 0, 0, 0])
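To tie this back to the error-versus-number-of-components plot (and the LOOCV follow-up), one way is leave-one-out cross-validation via cross_val_score. A sketch on the same iris data, using the question's LogisticRegression (max_iter is an assumption to avoid convergence warnings):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)
X_reduced = PCA().fit_transform(X)

errors = []
for num_comps in range(1, X_reduced.shape[1] + 1):
    # LOOCV accuracy using only the first num_comps principal components
    scores = cross_val_score(LogisticRegression(max_iter=1000),
                             X_reduced[:, :num_comps], y,
                             cv=LeaveOneOut())
    errors.append(1 - scores.mean())  # misclassification rate

# errors can now be plotted against range(1, len(errors) + 1)
```

Strictly speaking, the PCA should be refit inside each fold (e.g. with a Pipeline) so the held-out sample does not leak into the components; for a quick error curve this shortcut is common.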