How to fix UnicodeDecodeError: 'ascii' codec can't decode byte?

Asked: 2017-05-07 16:25:47

Tags: python-2.7 unicode beautifulsoup spacy

UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 0: ordinal not in range(128)

This is the error I get while trying to clean up a list of names that I extracted from an HTML page with spaCy.

My code:

from __future__ import unicode_literals  # must come before any other statement

import urllib
import requests
from bs4 import BeautifulSoup
import spacy
from spacy.en import English

nlp_toolkit = English()
nlp = spacy.load('en')

def get_text(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.content, "lxml")

    # delete unwanted tags:
    for s in soup(['figure', 'script', 'style']):
        s.decompose()

    # use separator to separate paragraphs and subtitles!
    article_soup = [s.get_text(separator="\n", strip=True) for s in soup.find_all('div', {'class': 'story-body__inner'})]

    text = ''.join(article_soup)
    return text

# using spacy
def get_names(all_tags):
    names=[]
    for ent in all_tags.ents:
        if ent.label_=="PERSON":
            names.append(str(ent))
    return names

def cleaning_names(names):
    new_names = [s.strip("'s") for s in names]  # strip leading/trailing ' and s characters
    myset = list(set(new_names))  # remove duplicates
    return myset

def main():
    url = "http://www.bbc.co.uk/news/uk-politics-39784164"
    text=get_text(url)
    text=u"{}".format(text)
    all_tags = nlp(text)
    names = get_names(all_tags)
    print "names:"
    print names
    mynewlist = cleaning_names(names)
    print mynewlist

if __name__ == '__main__':
    main()

For this particular URL, the list of names I get includes characters such as £ or $:

['Nick Clegg', 'Brexit', '\xc2\xa359bn', 'Theresa May', 'Brexit', 'Brexit', 'Mr Clegg', 'Mr Clegg', 'Mr Clegg', 'Brexit', 'Mr Clegg', 'Theresa May']

And then the error:

Traceback (most recent call last)
<ipython-input-19-8582e806c94a> in <module>()
     47 
     48 if __name__ == '__main__':
---> 49     main()

<ipython-input-19-8582e806c94a> in main()
     43     print "names:"
     44     print names
---> 45     mynewlist = cleaning_names(names)
     46     print mynewlist
     47 

<ipython-input-19-8582e806c94a> in cleaning_names(names)
     31 
     32 def cleaning_names(names):
---> 33     new_names = [s.strip("'s") for s in names] # remove 's' from names
     34     myset = list(set(new_names)) #remove duplicates
     35     return myset

UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 0: ordinal not in range(128)

I've tried different ways of fixing the unicode issue (including sys.setdefaultencoding('utf8')), and nothing worked. I hope someone has run into the same problem before and can suggest a fix. Thanks!

3 Answers:

Answer 0 (score: 1):

When you get a decode error from the 'ascii' codec, it is usually an indication that a byte string is being used in a context that requires a Unicode string (in Python 2; Python 3 doesn't allow it at all).

Since you've imported from __future__ import unicode_literals, the string "'s" is Unicode. That means the string you are trying to strip has to be a Unicode string as well. Fix that and you won't get the error anymore.
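To make this concrete, here is a minimal sketch (Python 2; the sample byte string mirrors the '\xc2\xa359bn' entry from the question's output, and using ent.text instead of str(ent) is one way, not the only way, to keep the names Unicode):

from __future__ import unicode_literals

# "'s" is a Unicode string here, so Python 2 implicitly decodes the
# byte string with the ASCII codec before stripping -- and fails.
name = '\xc2\xa359bn'                    # UTF-8 bytes for £59bn, as produced by str(ent)
try:
    name.strip("'s")
except UnicodeDecodeError as e:
    print e                              # 'ascii' codec can't decode byte 0xc2 in position 0 ...

# The fix: make the name Unicode first, e.g. by decoding explicitly,
# or by collecting ent.text instead of str(ent) in get_names.
print name.decode('utf-8').strip("'s")   # £59bn -- no error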

Answer 1 (score: 1):

As @MarkRansom commented, ignoring non-ASCII characters will come back to bite you.

First, take a look at How to fix: "UnicodeDecodeError: 'ascii' codec can't decode byte".

Also, note that this is an anti-pattern: Why should we NOT use sys.setdefaultencoding("utf-8") in a py script?

The simplest solution is just to use Python 3, which takes away some of this pain.

>>> import requests
>>> from bs4 import BeautifulSoup
>>> import spacy
>>> nlp = spacy.load('en')

>>> url = "http://www.bbc.co.uk/news/uk-politics-39784164"
>>> html = requests.get(url).content
>>> bsoup = BeautifulSoup(html, 'html.parser')
>>> text = '\n'.join(p.text for d in bsoup.find_all('div', {'class': 'story-body__inner'}) for p in d.find_all('p') if p.text.strip())

>>> doc = nlp(text)
>>> names = [ent for ent in doc.ents if ent.label_ == 'PERSON']
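As a quick sanity check, one could print the plain text of each span (a usage sketch; .text is a Span's Unicode text):

>>> print([ent.text for ent in names])   # e.g. ['Nick Clegg', 'Theresa May', ...]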

Answer 2 (score: 0):

I finally fixed my code. I'm surprised how easy it looks, but it took me a long time to get there, and I've seen many people confused by the same problem, so I decided to post my answer.

Adding this small function before passing the names on for further cleaning solved my problem.

def decode(names):
    # decode each byte string with the default ASCII codec,
    # silently dropping any bytes it cannot decode
    decodednames = []
    for name in names:
        decodednames.append(unicode(name, errors='ignore'))
    return decodednames

spaCy still thinks that £59bn is a PERSON, but that's fine with me; I can deal with it later in my code.
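Note what errors='ignore' is actually doing: unicode() defaults to the ASCII codec, so the non-ASCII bytes are silently dropped rather than converted, which is why u'59bn' in the output below has lost its pound sign. A small illustration (Python 2):

name = '\xc2\xa359bn'                  # UTF-8 bytes for £59bn
print unicode(name, errors='ignore')   # 59bn  -- the \xc2\xa3 bytes are dropped
print name.decode('utf-8')             # £59bn -- a lossless alternative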

Working code:

from __future__ import unicode_literals  # must come before any other statement

import urllib
import requests
from bs4 import BeautifulSoup
import spacy
from spacy.en import English

nlp_toolkit = English()
nlp = spacy.load('en')

def get_text(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.content, "lxml")

    # delete unwanted tags:
    for s in soup(['figure', 'script', 'style']):
        s.decompose()

    # use separator to separate paragraphs and subtitles!
    article_soup = [s.get_text(separator="\n", strip=True) for s in soup.find_all('div', {'class': 'story-body__inner'})]

    text = ''.join(article_soup)
    return text

# using spacy
def get_names(all_tags):
    names=[]
    for ent in all_tags.ents:
        if ent.label_=="PERSON":
            names.append(str(ent))
    return names

def decode(names):
    decodednames = []
    for name in names:
        decodednames.append(unicode(name, errors='ignore'))
    return decodednames

def cleaning_names(names):
    new_names = [s.strip("'s") for s in names]  # strip leading/trailing ' and s characters
    myset = list(set(new_names))  # remove duplicates
    return myset

def main():
    url = "http://www.bbc.co.uk/news/uk-politics-39784164"
    text=get_text(url)
    text=u"{}".format(text)
    all_tags = nlp(text)
    names = get_names(all_tags)
    print "names:"
    print names
    decodednames = decode(names)
    mynewlist = cleaning_names(decodednames)
    print mynewlist

if __name__ == '__main__':
    main()

This gives me no errors:

names:
['Nick Clegg', 'Brexit', '\xc2\xa359bn', 'Theresa May', 'Brexit', 'Brexit', 'Mr Clegg', 'Mr Clegg', 'Mr Clegg', 'Brexit', 'Mr Clegg', 'Theresa May']
[u'Mr Clegg', u'Brexit', u'Nick Clegg', u'59bn', u'Theresa May']