I am using scikit-learn to create feature vectors for documents. My goal is to use these feature vectors to build a binary classifier (a gender classifier).
I want to use the k most frequent words as features, i.e. the k highest-count words from each of the two labeled documents (k = 10 → 20 features, since there are 2 labels).
Both of my documents (label1document, label2document) are filled with instances like this:
user:somename, post:"A written text which i use"
My understanding so far is that I use all the text from all instances of both documents to build one vocabulary with counts (counts for both labels, so that I can compare the label data):
from sklearn.feature_extraction.text import CountVectorizer

# These are my documents containing all the text
label1document = "car eat essen sleep sleep"
label2document = "eat sleep woman woman woman woman"

vectorizer = CountVectorizer(min_df=1)
corpus = [label1document, label2document]
# X is a matrix with the counts of the words from both documents
X = vectorizer.fit_transform(corpus)
Question 1: What do I have to pass to fit_transform to get the most frequently counted words for both labels?
X_new = SelectKBest(chi2, k=2).fit_transform( ?? )
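(For reference, SelectKBest with chi2 scores each feature against a label vector, so it needs both the count matrix X and one label per row. A minimal sketch, reusing the example documents above and an assumed label vector y:)

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

label1document = "car eat essen sleep sleep"
label2document = "eat sleep woman woman woman woman"
corpus = [label1document, label2document]
y = [0, 1]  # assumed: one label per document

vectorizer = CountVectorizer(min_df=1)
X = vectorizer.fit_transform(corpus)

# chi2 scores each word against the labels; k=2 keeps the 2 best-scoring words
X_new = SelectKBest(chi2, k=2).fit_transform(X, y)
print(X_new.shape)  # (2, 2)
```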
In the end, I want training data (instances) that looks like this:
<label> <feature1 : value> ... <featureN: value>
Question 2: How do I get from here to that training data?
Oliver
Answer 0 (score: 5)
import pandas as pd
# get the names of the features
# (in scikit-learn >= 1.0 this is vectorizer.get_feature_names_out())
features = vectorizer.get_feature_names()
# change the matrix from sparse to dense
df = pd.DataFrame(X.toarray(), columns=features)
df
will return:
car eat essen sleep woman
0 1 1 1 2 0
1 0 1 0 1 4
Then, to get the most frequent terms:
highest_frequency = df.max().sort_values(ascending=False)
highest_frequency
will return:
woman 4
sleep 2
essen 1
eat 1
car 1
dtype: int64
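(Building on this, one way to cut the DataFrame down to the k most frequent words is to slice it by the top-k index names of the sorted maxima; a sketch with k assumed to be 2, rebuilding the same counts standalone:)

```python
import pandas as pd

# the same count matrix as above, rebuilt for a self-contained example
df = pd.DataFrame(
    [[1, 1, 1, 2, 0], [0, 1, 0, 1, 4]],
    columns=["car", "eat", "essen", "sleep", "woman"],
)
highest_frequency = df.max().sort_values(ascending=False)

k = 2  # assumed
top_k = df[highest_frequency.index[:k]]  # keep only the k best columns
print(list(top_k.columns))  # ['woman', 'sleep']
```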
Once you have the data in a DataFrame, it is easy to get it into whatever format you want, for example:
df.to_dict()
>>> {u'car': {0: 1, 1: 0},
u'eat': {0: 1, 1: 1},
u'essen': {0: 1, 1: 0},
u'sleep': {0: 2, 1: 1},
u'woman': {0: 0, 1: 4}}
df.to_json()
>>>'{"car":{"0":1,"1":0},"eat":{"0":1,"1":1},"essen":{"0":1,"1":0},"sleep":{"0":2,"1":1},"woman":{"0":0,"1":4}}'
df.to_csv()
>>>',car,eat,essen,sleep,woman\n0,1,1,1,2,0\n1,0,1,0,1,4\n'
Here is some useful documentation.