How do I transform new data into the PCA components of my training data?

Asked: 2014-10-03 15:48:08

Tags: python machine-learning scikit-learn pca

Suppose I have some text sentences that I want to cluster with k-means.

sentences = [
    "fix grammatical or spelling errors",
    "clarify meaning without changing it",
    "correct minor mistakes",
    "add related resources or links",
    "always respect the original author"
]

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

vectorizer = CountVectorizer(min_df=1)
X = vectorizer.fit_transform(sentences)
num_clusters = 2
km = KMeans(n_clusters=num_clusters, init='random', n_init=1, verbose=1)
km.fit(X)

Now I can predict which cluster a new text will fall into:

new_text = "hello world"
vec = vectorizer.transform([new_text])
print(km.predict(vec)[0])

However, say I apply PCA to reduce the 10,000 features to 50.

from sklearn.decomposition import RandomizedPCA

pca = RandomizedPCA(n_components=50, whiten=True)
X2 = pca.fit_transform(X)
km.fit(X2)

I can no longer do the same thing to predict the cluster of a new text, because the vectorizer's output is no longer compatible:

new_text = "hello world"
vec = vectorizer.transform([new_text])
print(km.predict(vec)[0])
ValueError: Incorrect number of features. Got 10000 features, expected 50

So how do I transform my new text into the lower-dimensional feature space?

2 Answers:

Answer 0 (score: 6)

You want to use pca.transform on your new data before feeding it to the model. This performs dimensionality reduction using the same PCA model that was fitted when you ran pca.fit_transform on your original data. You can then use your fitted model to predict on this reduced data.

Basically, think of it as fitting one large model made up of three smaller models stacked together. First you have a CountVectorizer model that determines how to process the data. Then you run a RandomizedPCA model that performs the dimensionality reduction. Finally you run a KMeans model for the clustering. When you fit the models, you go down the stack and fit each one in turn. And when you want to make a prediction, you also have to go down the stack and apply each one in turn.

# Initialize models
vectorizer = CountVectorizer(min_df=1)
pca = RandomizedPCA(n_components=50, whiten=True)
km = KMeans(n_clusters=2, init='random', n_init=1, verbose=1)

# Fit models
X = vectorizer.fit_transform(sentences)
X2 = pca.fit_transform(X)
km.fit(X2)

# Predict with models
X_new = vectorizer.transform(["hello world"])
X2_new = pca.transform(X_new)
km.predict(X2_new)
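As a runnable sketch of the same fit/transform chain: in current scikit-learn releases RandomizedPCA has been removed, and the equivalent estimator is PCA(svd_solver='randomized'), which accepts only dense input (hence the .toarray() calls). The 3-component setting is an arbitrary choice for this tiny example corpus, not the original 50.

```python
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "fix grammatical or spelling errors",
    "clarify meaning without changing it",
    "correct minor mistakes",
    "add related resources or links",
    "always respect the original author",
]

vectorizer = CountVectorizer(min_df=1)
# PCA(svd_solver='randomized') replaces the deprecated RandomizedPCA;
# unlike RandomizedPCA it only accepts dense arrays, hence .toarray().
pca = PCA(n_components=3, svd_solver='randomized', whiten=True)
km = KMeans(n_clusters=2, init='random', n_init=1)

# Fit: walk down the stack, fitting each model on the previous one's output.
X = vectorizer.fit_transform(sentences).toarray()
X2 = pca.fit_transform(X)
km.fit(X2)

# Predict: walk down the same stack, using transform(), never fit_transform().
new_vec = vectorizer.transform(["hello world"]).toarray()
label = km.predict(pca.transform(new_vec))[0]
print(label)  # cluster index, 0 or 1
```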

Answer 1 (score: 3)

Use a Pipeline:

>>> from sklearn.cluster import KMeans
>>> from sklearn.decomposition import TruncatedSVD
>>> from sklearn.feature_extraction.text import CountVectorizer
>>> from sklearn.pipeline import make_pipeline
>>> sentences = [
...     "fix grammatical or spelling errors",
...     "clarify meaning without changing it",
...     "correct minor mistakes",
...     "add related resources or links",
...     "always respect the original author"
... ]
>>> vectorizer = CountVectorizer(min_df=1)
>>> svd = TruncatedSVD(n_components=5)
>>> km = KMeans(n_clusters=2, init='random', n_init=1)
>>> pipe = make_pipeline(vectorizer, svd, km)
>>> pipe.fit(sentences)
Pipeline(steps=[('countvectorizer', CountVectorizer(analyzer=u'word', binary=False, decode_error=u'strict',
        dtype=<type 'numpy.int64'>, encoding=u'utf-8', input=u'content',
        lowercase=True, max_df=1.0, max_features=None, min_df=1,
        ngram_range=(1, 1), preprocessor=None, stop_words=None,...n_init=1,
    n_jobs=1, precompute_distances='auto', random_state=None, tol=0.0001,
    verbose=1))])
>>> pipe.predict(["hello, world"])
array([0], dtype=int32)

(Showing TruncatedSVD because RandomizedPCA will stop working on term-frequency matrices in an upcoming release; it actually performed an SVD, not a full PCA, anyway.)
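To illustrate that point: TruncatedSVD operates directly on the sparse term-count matrix that CountVectorizer produces, with no densifying step, and new documents are projected with transform() against the already-fitted components. A minimal sketch (n_components=3 is an arbitrary choice for this tiny corpus):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "fix grammatical or spelling errors",
    "clarify meaning without changing it",
    "correct minor mistakes",
    "add related resources or links",
    "always respect the original author",
]

vectorizer = CountVectorizer(min_df=1)
X = vectorizer.fit_transform(sentences)  # scipy sparse matrix

svd = TruncatedSVD(n_components=3)
X_reduced = svd.fit_transform(X)         # dense array, shape (5, 3)

# A new text is projected with transform(), reusing the fitted components.
vec = vectorizer.transform(["hello world"])
print(svd.transform(vec).shape)          # (1, 3)
```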