I have already computed TF-IDF with sklearn, but the problem is that I can't use the built-in English stop words because my text is in Malay (non-English). What I need is to import a txt file containing my own stop-word list.
stopword.txt
saya
cintakan
awak
tfidf.py
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = ['Saya benci awak',
'Saya cinta awak',
'Saya x happy awak',
'Saya geram awak',
'Saya taubat awak']
vocabulary = "taubat".split()
vectorizer = TfidfVectorizer(analyzer='word', vocabulary=vocabulary)
X = vectorizer.fit_transform(corpus)
idf = vectorizer.idf_
print(dict(zip(vectorizer.get_feature_names(), idf)))
Answer 0 (score: 2)
You can load your own list of stop words and pass it as a parameter to TfidfVectorizer. In your example:
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = ['Saya benci awak',
'Saya cinta awak',
'Saya x happy awak',
'Saya geram awak',
'Saya taubat awak']
# HERE YOU DO YOUR MAGIC: you open your file and load the list of STOP WORDS
stop_words = [line.strip() for line in open('stopword.txt', 'r', encoding='utf-8') if line.strip()]
vectorizer = TfidfVectorizer(analyzer='word', stop_words = stop_words)
X = vectorizer.fit_transform(corpus)
idf = vectorizer.idf_
print(dict(zip(vectorizer.get_feature_names(), idf)))
Output with stop_words:
{'taubat': 2.09861228866811, 'happy': 2.09861228866811, 'cinta': 2.09861228866811, 'benci': 2.09861228866811, 'geram': 2.09861228866811}
Output without the stop_words parameter:
{'benci': 2.09861228866811, 'taubat': 2.09861228866811, 'saya': 1.0, 'awak': 1.0, 'geram': 2.09861228866811, 'cinta': 2.09861228866811, 'happy': 2.09861228866811}
Warning: I would not use the vocabulary parameter, because it tells TfidfVectorizer to pay attention only to the words listed in it, and it is usually harder to enumerate every word you want to consider than to name the words you want to discard. So if you remove the vocabulary parameter from your example and add the stop_words parameter with your list, it will work as expected.
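To make the difference concrete, here is a minimal sketch (toy two-document corpus and made-up stop words, not the asker's data) contrasting the two parameters:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ['Saya benci awak', 'Saya cinta awak']

# vocabulary: score ONLY the words you list; everything else is ignored
vec_vocab = TfidfVectorizer(analyzer='word', vocabulary=['benci'])
vec_vocab.fit_transform(corpus)
print(sorted(vec_vocab.vocabulary_))   # ['benci']

# stop_words: score everything EXCEPT the words you list
vec_stop = TfidfVectorizer(analyzer='word', stop_words=['saya', 'awak'])
vec_stop.fit_transform(corpus)
print(sorted(vec_stop.vocabulary_))    # ['benci', 'cinta']
```

With vocabulary you must anticipate every informative word up front; with stop_words you only name the words to drop, which scales much better as the corpus grows.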
Answer 1 (score: 0)
In Python 3, I would suggest the following procedure to load your own stop-word list:
with open('C:\\Users\\mobarget\\Google Drive\\ACADEMIA\\7_FeministDH for Susan\\Stop words Letters_improved.txt', 'r', encoding='utf-8') as file:
    my_stopwords = file.read().splitlines()
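A short usage sketch of the same idea (the file name stopword.txt and its contents are assumptions mirroring the question, written here so the example is self-contained): read one stop word per line, then hand the list to TfidfVectorizer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Assumed stop-word file, one word per line (as in the question)
with open('stopword.txt', 'w', encoding='utf-8') as f:
    f.write('saya\nawak\n')

with open('stopword.txt', 'r', encoding='utf-8') as f:
    my_stopwords = f.read().splitlines()

vectorizer = TfidfVectorizer(stop_words=my_stopwords)
X = vectorizer.fit_transform(['Saya benci awak', 'Saya cinta awak'])
print(sorted(vectorizer.vocabulary_))  # stop words are excluded
```

Note that splitlines() yields a list of words, which is what stop_words expects; joining the file into a single comma-separated string would not be matched against individual tokens.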