I have this function, which is given a text document containing many words. It chokes when it hits words like the one below. How do I specify the correct encoding? I tried encoding='string', 'unicode', etc. and decode_error='ignore' etc., but it doesn't work.
co¤a co¤azo
from sklearn.feature_extraction.text import CountVectorizer

def tokenize(text):
    sentence = [text]
    ngv2 = CountVectorizer(encoding='utf-8', analyzer='word', min_df=1, stop_words='english')
    try:
        ngv2.fit_transform(sentence)
    except Exception:
        print sentence
    S = ngv2.get_feature_names()
    ngw = CountVectorizer(analyzer='char_wb', ngram_range=(3, 7), min_df=1)
    ngw.fit_transform(S)
    return ngw.get_feature_names()
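For reference, the decode failure itself can be reproduced with plain Python, with no vectorizer involved. This is a minimal sketch assuming the input bytes are Latin-1 encoded ('¤' is byte 0xA4 in Latin-1, which is not valid UTF-8); if your files use a different encoding, substitute it accordingly:

```python
# The sample word as raw bytes: '¤' is 0xA4 in Latin-1.
raw = b'co\xa4a co\xa4azo'

# Decoding as UTF-8 (CountVectorizer's default) raises, which is
# what makes fit_transform choke on this input.
try:
    raw.decode('utf-8')
except UnicodeDecodeError as e:
    print('utf-8 failed:', e.reason)  # 0xA4 is not a valid UTF-8 start byte

# Decoding with the encoding the bytes were actually written in succeeds,
# so passing encoding='latin-1' to CountVectorizer should as well.
print(raw.decode('latin-1'))  # co¤a co¤azo

# decode_error='replace' instead maps the bad bytes to U+FFFD rather
# than raising, which keeps the vectorizer running but mangles the word.
print(raw.decode('utf-8', errors='replace'))
```

The point is that encoding= must name the real encoding of the bytes you feed in; values like 'string' or 'unicode' are not codec names.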
EDIT: I changed the code so that the exception is skipped. Boiled it down to the simplest failing case (a snippet that throws the exception, followed by the erroneous output):
ngv2 = CountVectorizer(decode_error='replace', analyzer='word', min_df=1, stop_words='english')
try:
    ngv2.fit_transform(sentence)
except Exception as e:
    print sentence, e.message
For the input '[] p [] p [', this prints:
empty vocabulary; perhaps the documents only contain stop words
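This second error is unrelated to the byte decoding: with decode_error='replace' the input decodes fine, but CountVectorizer's default token_pattern, r"(?u)\b\w\w+\b", only keeps word tokens of two or more characters. An input made of brackets and single letters yields no tokens at all, hence "empty vocabulary". A minimal sketch with the standard re module:

```python
import re

# CountVectorizer's default token_pattern: word tokens of 2+ characters.
TOKEN_PATTERN = r'(?u)\b\w\w+\b'

# The failing input contains only punctuation and single-letter tokens,
# so tokenization produces nothing and the vocabulary comes out empty.
print(re.findall(TOKEN_PATTERN, '[]p[]p['))  # → []

# By contrast, the earlier sample still yields tokens ('co', 'co', 'azo'),
# since '¤' simply acts as a non-word separator.
print(re.findall(TOKEN_PATTERN, u'co\xa4a co\xa4azo'))
```

So after fixing the encoding, inputs like this one still need to be caught (as the try/except above does) or filtered out before fit_transform.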