I am trying to scrape a page, but I get a UnicodeDecodeError. Here is my code:
import urllib2
from bs4 import BeautifulSoup  # or, for BeautifulSoup 3: from BeautifulSoup import BeautifulSoup

def soup_def(link):
    req = urllib2.Request(link, headers={'User-Agent': "Magic Browser"})
    usock = urllib2.urlopen(req)
    # charset advertised in the Content-Type response header
    encoding = usock.headers.getparam('charset')
    page = usock.read().decode(encoding)
    usock.close()
    soup = BeautifulSoup(page)
    return soup

soup = soup_def("http://www.geekbuying.com/item/Ainol-Novo-10-Hero-II-Quad-Core--Tablet-PC-10-1-inch-IPS-1280-800-1GB-RAM-16GB-ROM-Android-4-1--HDMI-313618.html")
The error:
UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 284: invalid start byte
I have seen that several other users hit the same error, but I could not find any solution.
Answer 0 (score: 0)
Here is what I found on Wikipedia about the byte 0xff: it is a marker used by UTF-16.
UTF-16
In UTF-16, a BOM (U+FEFF) may be placed as the first character of a file or character stream to indicate the endianness (byte order) of all the 16-bit code units of the file or stream.
If the 16-bit units are represented in big-endian byte order, this BOM character will appear in the sequence of bytes as 0xFE followed by 0xFF. This sequence appears as the ISO-8859-1 characters þÿ in a text display that expects the text to be ISO-8859-1.
If the 16-bit units use little-endian order, the sequence of bytes will have 0xFF followed by 0xFE. This sequence appears as the ISO-8859-1 characters ÿþ in a text display that expects the text to be ISO-8859-1.
Programs expecting UTF-8 may show these or error indicators, depending on how they handle UTF-8 encoding errors. In all cases they will probably display the rest of the file as garbage (a UTF-16 text containing ASCII only will be fairly readable).
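A quick way to see whether such a byte-order mark is actually present is to peek at the first raw bytes of the response before decoding them. A minimal sketch along those lines (it just reuses the URL and the "Magic Browser" header from the question; nothing here is verified against the real page):

import urllib2

req = urllib2.Request(
    "http://www.geekbuying.com/item/Ainol-Novo-10-Hero-II-Quad-Core--Tablet-PC-10-1-inch-IPS-1280-800-1GB-RAM-16GB-ROM-Android-4-1--HDMI-313618.html",
    headers={'User-Agent': "Magic Browser"})
raw = urllib2.urlopen(req).read()

# UTF-16 BE starts with '\xfe\xff', UTF-16 LE with '\xff\xfe',
# and a UTF-8 BOM would be '\xef\xbb\xbf'.
print repr(raw[:3])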
So I have two ideas:
(1) It may be that the page should be decoded as utf-16 rather than utf-8 (a rough fallback sketch follows this list).
(2) The error only occurs because you are trying to print the whole soup to the screen; at that point it also depends on whether your IDE (Eclipse/PyCharm) is smart enough to render those unicode characters.
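For idea (1), one hedged way to try it is to fall back to utf-16 only when the charset taken from the headers fails. A rough sketch, not tested against this page, keeping the rest of the original function the same:

def soup_def(link):
    req = urllib2.Request(link, headers={'User-Agent': "Magic Browser"})
    usock = urllib2.urlopen(req)
    encoding = usock.headers.getparam('charset') or 'utf-8'
    raw = usock.read()
    usock.close()
    try:
        page = raw.decode(encoding)
    except UnicodeDecodeError:
        # second guess: maybe the page is really UTF-16
        page = raw.decode('utf-16')
    return BeautifulSoup(page)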
If I were you, I would try to move on without printing the whole soup and only collect the piece you actually want, to see whether you can get that far. If there is no problem there, then why worry that you cannot print the whole soup to the screen.
If you really do want to print the soup to the screen, try:
print soup.prettify(encoding='utf-16')
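And if you only need one specific piece of the page, pulling out just that piece keeps the amount of text your console has to render small. For example (using the <title> tag purely as an illustration, not something checked against the actual page):

title = soup.find('title')
if title is not None and title.string:
    # encode explicitly so the console receives plain UTF-8 bytes
    print title.string.strip().encode('utf-8')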