For supervised learning, my matrix is very large, so only some models will accept it. I have read that PCA can help a great deal with reducing the dimensionality.
Here is my code:
import subprocess
import numpy as np

def run(command):
    # Run a shell command and return its decoded output
    output = subprocess.check_output(command, shell=True)
    return output.decode()

# Load the vocabulary: one whitespace-separated token per entry
f = open('/Users/ya/Documents/10percent/Vik.txt', 'r')
vocab_temp = f.read().split()
f.close()
col = len(vocab_temp)
print("Training column size:")
print(col)

# Count the number of training rows with wc -l
row = run('cat ' + '/Users/ya/Documents/10percent/X_true.txt' + " | wc -l").split()[0]
print("Training row size:")
print(row)

matrix_tmp = np.zeros((int(row), col), dtype=np.int64)
print("Train Matrix size:")
print(matrix_tmp.size)

# label_tmp.ndim must be equal to 1
label_tmp = np.zeros((int(row)), dtype=np.int64)

# Build a binary term-document matrix: 1 where a vocabulary word occurs in a line
f = open('/Users/ya/Documents/10percent/X_true.txt', 'r')
count = 0
for line in f:
    line_tmp = line.split()
    for word in line_tmp:
        if word not in vocab_temp:
            continue
        matrix_tmp[count][vocab_temp.index(word)] = 1
    count = count + 1
f.close()

print("Train matrix is:\n ")
print(matrix_tmp)
print(label_tmp)
print(len(label_tmp))
print("No. of topics in train:")
print(len(set(label_tmp)))
print("Train Label size:")
print(len(label_tmp))
I would like to apply PCA to matrix_tmp, since its size is roughly (202180 x 9984). How can I modify my code to include it?
Answer 0 (score: 1)
import codecs
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer

# Read the training documents, one per line
with codecs.open('input_file', 'r', encoding='utf-8') as inf:
    lines = inf.readlines()

# Binary bag-of-words; fit_transform returns a sparse matrix, not a dense array
vectorizer = CountVectorizer(binary=True)
X_train = vectorizer.fit_transform(lines)

# Reduce the dimensionality only when it is actually needed
perform_pca = False
if perform_pca:
    n_components = 100
    pca = TruncatedSVD(n_components)
    X_train = pca.fit_transform(X_train)
1 - Use the vectorizers available in sklearn to build the matrix; they produce a sparse matrix instead of a full matrix that is mostly zeros.
2 - Perform PCA only when you actually need it.
3 - If you want, experiment with the parameters of the vectorizer and of the PCA (a short sketch follows below).
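As an illustration of point 3, here is a minimal sketch of that kind of tuning, reusing the file paths from the question; the vocabulary= argument, the deduplication step, and the choice of 100 components are only assumptions for the example:

import codecs
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer

# Reuse the word list from the question so the columns match vocab_temp
# (CountVectorizer requires the vocabulary to contain no duplicates)
with codecs.open('/Users/ya/Documents/10percent/Vik.txt', 'r', encoding='utf-8') as vf:
    vocab_temp = sorted(set(vf.read().split()))

with codecs.open('/Users/ya/Documents/10percent/X_true.txt', 'r', encoding='utf-8') as inf:
    lines = inf.readlines()

# vocabulary= restricts the columns to the known word list; dropping it and
# setting min_df/max_df instead lets sklearn build the vocabulary itself
vectorizer = CountVectorizer(binary=True, vocabulary=vocab_temp)
X_train = vectorizer.fit_transform(lines)   # sparse, n_lines x len(vocab_temp)

# n_components is the main knob of the dimensionality reduction
pca = TruncatedSVD(n_components=100)
X_reduced = pca.fit_transform(X_train)
print(X_reduced.shape)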
Answer 1 (score: 0)
Scikit-learn provides several PCA implementations. A useful one is TruncatedSVD, and its usage is fairly simple:
from sklearn.decomposition import TruncatedSVD

# TruncatedSVD works directly on scipy sparse matrices as well as dense arrays
n_components = 100
pca = TruncatedSVD(n_components)
matrix_reduced = pca.fit_transform(matrix_tmp)
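Since matrix_tmp in the question is a dense int64 array that is mostly zeros, it may also be worth converting it to a sparse format before the decomposition; a minimal sketch (csr_matrix is standard scipy, the 100 components are illustrative):

from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD

# Store the 0/1 matrix in compressed sparse row format to save memory
matrix_sparse = csr_matrix(matrix_tmp)

pca = TruncatedSVD(n_components=100)
matrix_reduced = pca.fit_transform(matrix_sparse)

# Fraction of the variance retained by the 100 components
print(pca.explained_variance_ratio_.sum())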