I was able to get the code to spit out the words and their frequencies. However, I want to use only scikit-learn to remove the stop words; nltk doesn't work at my workplace. Does anyone have suggestions on how to remove stop words?
import pandas as pd
df = pd.DataFrame(['my big dog', 'my lazy cat'])
df
0
0 my big dog
1 my lazy cat
value_list = [row[0] for row in df.itertuples(index=False, name=None)]
value_list
['my big dog', 'my lazy cat']
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
x_train = cv.fit_transform(value_list)
x_train
<2x5 sparse matrix of type '<class 'numpy.int64'>'
with 6 stored elements in Compressed Sparse Row format>
x_train.toarray()
array([[1, 0, 1, 0, 1],
       [0, 1, 0, 1, 1]], dtype=int64)
cv.vocabulary_
{'my': 4, 'big': 0, 'dog': 2, 'lazy': 3, 'cat': 1}
x_train_sum = x_train.sum(axis=0)
x_train_sum
matrix([[1, 1, 1, 1, 2]], dtype=int64)
for word, col in cv.vocabulary_.items():
    print('word:{:10s} | count:{:2d}'.format(word, x_train_sum[0, col]))
word:my         | count: 2
word:big        | count: 1
word:dog        | count: 1
word:lazy       | count: 1
word:cat        | count: 1
with open('my-file.csv', 'w') as f:
    for word, col in cv.vocabulary_.items():
        f.write('{};{}\n'.format(word, x_train_sum[0, col]))
Answer 0 (score: 2)
You can initialize CountVectorizer with a custom stop_words list. For example, adding my and big to stop_words leaves only cat, dog, and lazy in the vocabulary:
stop_words=['my', 'big']
cv = CountVectorizer(stop_words=stop_words)
x_train = cv.fit_transform(value_list)
x_train.toarray()
array([[0, 1, 0],
       [1, 0, 1]], dtype=int64)
cv.vocabulary_
{'cat': 0, 'dog': 1, 'lazy': 2}
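If you would rather not maintain the list by hand, scikit-learn also ships a built-in English stop word list that you can enable with stop_words='english' (common function words such as my are on it). Below is a minimal sketch, assuming the same example data as in the question, that reuses the frequency-printing loop so the counts come out with stop words already removed:

from sklearn.feature_extraction.text import CountVectorizer

value_list = ['my big dog', 'my lazy cat']

# Built-in English stop word list; common function words such as 'my' are dropped
cv = CountVectorizer(stop_words='english')
x_train = cv.fit_transform(value_list)

# Sum counts over all documents and print word frequencies, as in the question
x_train_sum = x_train.sum(axis=0)
for word, col in cv.vocabulary_.items():
    print('word:{:10s} | count:{:2d}'.format(word, x_train_sum[0, col]))

The built-in list is fairly broad, so if you need precise control over exactly which words are dropped, passing your own stop_words list as shown above is still the safer choice.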