I first convert the PDF to plain text (I print it out and everything looks fine), and then when I try to run word_tokenize() from NLTK on it I get a UnicodeDecodeError.
I get this error even though I try decode('utf-8').encode('utf-8') on the plain text beforehand. In the traceback I noticed that the line of code inside word_tokenize() that first raises the error is plaintext.split('\n'). That is why I tried to reproduce the error by running split('\n') on the plain text myself, but that still does not raise anything.
So I understand neither what is causing the error nor how to avoid it.
Any help would be greatly appreciated! :) Maybe I could avoid it by changing something in the PDF-to-txt conversion?
Here is the relevant code:
from cStringIO import StringIO
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import os
import string
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage
stopset = stopwords.words('english')
path = 'my_folder'
listing = os.listdir(path)
for infile in listing:
    text = self.convert_pdf_to_txt(path+infile)
    text = text.decode('utf-8').encode('utf-8').lower()
    print text
    splitted = text.split('\n')
    filtered_tokens = [i for i in word_tokenize(text) if i not in stopset and i not in string.punctuation]
This is the method I call to convert from PDF to txt:
def convert_pdf_to_txt(self, path):
    rsrcmgr = PDFResourceManager()
    retstr = StringIO()
    codec = 'utf-8'
    laparams = LAParams()
    device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)
    fp = file(path, 'rb')
    interpreter = PDFPageInterpreter(rsrcmgr, device)
    password = ""
    maxpages = 0
    caching = True
    pagenos = set()
    for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages, password=password, caching=caching, check_extractable=True):
        interpreter.process_page(page)
    fp.close()
    device.close()
    ret = retstr.getvalue()
    retstr.close()
    return ret
This is the traceback of the error I get:
Traceback (most recent call last):
File "/home/iammyr/opt/workspace/task-logger/task_logger/nlp/pre_processing.py", line 65, in <module>
obj.tokenizeStopWords()
File "/home/iammyr/opt/workspace/task-logger/task_logger/nlp/pre_processing.py", line 29, in tokenizeStopWords
filtered_tokens = [i for i in word_tokenize(text) if i not in stopset and i not in string.punctuation]
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 93, in word_tokenize
return [token for sent in sent_tokenize(text)
[...]
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 586, in _tokenize_words
for line in plaintext.split('\n'):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 9: ordinal not in range(128)
Thanks a million! ;)
Answer 0 (score: 4)
You are turning a perfectly good Unicode string (back) into a bunch of typeless bytes that Python has no idea how to handle, but desperately tries to apply the ASCII codec to anyway. Remove the .encode('utf-8') and you should be fine.
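For reference, a minimal sketch of the corrected loop (everything else from the question's code assumed unchanged): decode the UTF-8 bytes returned by convert_pdf_to_txt() once and keep the result as a unicode object, so that word_tokenize() receives text rather than bytes.

for infile in listing:
    text = self.convert_pdf_to_txt(path+infile)
    # decode once and keep the unicode object; no .encode('utf-8') afterwards
    text = text.decode('utf-8').lower()
    filtered_tokens = [i for i in word_tokenize(text) if i not in stopset and i not in string.punctuation]

This also suggests why splitted = text.split('\n') in the question never raised anything: splitting a byte string with a byte-string separator involves no decoding at all, whereas the split inside NLTK's punkt module apparently uses a unicode '\n' separator, so Python 2 first has to decode the UTF-8 bytes with its default ASCII codec and fails on byte 0xc2.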