I am creating a corpus from a set of text files like this:
newcorpus = PlaintextCorpusReader(corpus_root, '.*')
Now I want to access the words of a file like this:
text_bow = newcorpus.words("file_name.txt")
But I get the following error:
UnicodeDecodeError: 'utf8' codec can't decode byte 0x96 in position 0: invalid start byte
Multiple files throw this error. How do I get rid of this UnicodeDecodeError?
Answer 0 (score: 0)
To resolve the decoding error, do one of the following:

1. Read the corpus files as bytes and don't decode them to unicode at all.
2. Find out the encoding that was actually used for the file, and use it. (The corpus documentation should tell you.) I suspect it is Latin-1.
3. Use Latin-1 regardless of the actual encoding. This gets rid of the exception, even though the resulting string will be wrong wherever the original was not Latin-1.
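The three options above can be sketched with plain Python byte/string handling (the sample content is made up; 0x96 is the byte from the question's traceback, and it happens to be an en dash in Windows-1252, a common culprit for this error):

```python
raw = b"\x96 legacy text"  # starts with the 0x96 byte from the traceback

# Option 1: keep bytes, never decode -- no UnicodeDecodeError possible.
as_bytes = raw

# Option 2: decode with the file's real encoding. In Windows-1252
# that byte is an en dash, so the text comes out correctly.
print(raw.decode("cp1252"))

# Option 3: decode as Latin-1. This never raises, but 0x96 becomes an
# invisible control character instead of the intended dash.
print(raw.decode("latin-1"))
```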
Answer 1 (score: 0)
First, find out the encoding of your files. Perhaps try https://stackoverflow.com/a/16203777/610569 or ask your data source.
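If a proper detector is not at hand, a crude fallback is to try a few candidate encodings in order (the helper name guess_encoding is made up for this sketch; note that latin-1 accepts any byte sequence, so it only makes sense as the last candidate):

```python
def guess_encoding(data: bytes, candidates=("utf-8", "cp1252", "latin-1")):
    """Return the first candidate encoding that decodes without error."""
    for enc in candidates:
        try:
            data.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return None

print(guess_encoding(b"caf\xc3\xa9"))  # valid UTF-8
print(guess_encoding(b"caf\xe9"))      # 0xE9 fails UTF-8, decodes as cp1252
```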
Then use the encoding= parameter of PlaintextCorpusReader, e.g. latin-1:
newcorpus = PlaintextCorpusReader(corpus_root, '.*', encoding='latin-1')
From the source at https://github.com/nltk/nltk/blob/develop/nltk/corpus/reader/plaintext.py:
class PlaintextCorpusReader(CorpusReader):
    """
    Reader for corpora that consist of plaintext documents. Paragraphs
    are assumed to be split using blank lines. Sentences and words can
    be tokenized using the default tokenizers, or by custom tokenizers
    specificed as parameters to the constructor.

    This corpus reader can be customized (e.g., to skip preface
    sections of specific document formats) by creating a subclass and
    overriding the ``CorpusView`` class variable.
    """

    CorpusView = StreamBackedCorpusView
    """The corpus view class used by this reader. Subclasses of
    ``PlaintextCorpusReader`` may specify alternative corpus view
    classes (e.g., to skip the preface sections of documents.)"""

    def __init__(self, root, fileids,
                 word_tokenizer=WordPunctTokenizer(),
                 sent_tokenizer=nltk.data.LazyLoader(
                     'tokenizers/punkt/english.pickle'),
                 para_block_reader=read_blankline_block,
                 encoding='utf8'):
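To see why changing the reader's encoding argument away from the default 'utf8' helps, here is a minimal reproduction of the underlying file I/O, without NLTK (the path and content are made up; the reader opens each corpus file roughly like open(path, encoding=...)):

```python
import os
import tempfile

# Write one byte that is invalid as a UTF-8 start byte, as in the question.
root = tempfile.mkdtemp()
path = os.path.join(root, "file_name.txt")
with open(path, "wb") as f:
    f.write(b"\x96 some legacy text")

# Default behaviour (encoding='utf8'): decoding fails on byte 0x96.
try:
    with open(path, encoding="utf8") as f:
        f.read()
except UnicodeDecodeError as e:
    print("utf8 failed:", e)

# With encoding='latin-1' every byte maps to some character, so reading
# always succeeds (though 0x96 may not be the character the author meant).
with open(path, encoding="latin-1") as f:
    text = f.read()
print(len(text))
```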