I am fetching the source of a web page whose encoding is cp1252. Chrome displays the page correctly.
Here is my code:
import sys
from urllib.request import urlopen
from bs4 import BeautifulSoup, UnicodeDammit
import re
import codecs
url = "http://www.sec.gov/Archives/edgar/data/1400810/000119312513211026/d515005d10q.htm"
page = urlopen(url).read()
print(page)
# A little preview :
# b'...Regulation S-T (§232.405 of this chapter) during the preceding 12 months (or for such shorter period that the\nregistrant was required to submit and post such files). Yes <FONT STYLE="FONT-FAMILY:WINGDINGS">x</FONT>...'
soup = BeautifulSoup(page, from_encoding="cp1252")
print(str(soup).encode('utf-8'))
# Same preview section as above
# b'...Regulation S-T (\xc2\xa7232.405 of this chapter) during the preceding 12 months (or for such shorter period that the\nregistrant was required to submit and post such files).\xc2\xa0\xc2\xa0\xc2\xa0\xc2\xa0Yes\xc2\xa0\xc2\xa0<font style="FONT-FAMILY:WINGDINGS">x</font>'
From the preview sections we can see that:
&nbsp; = \xc2\xa0
&#167; = \xc2\xa7
&#120; = x
For the cp1252 encoding standard, I am referring to http://en.wikipedia.org/wiki/Windows-1252#Code_page_layout and /Lib/encodings/cp1252.py.
When I use BeautifulSoup(page, from_encoding="cp1252"), some characters are encoded correctly, but others are not.
character | decimal reference | cp1252 -> utf-8 encoding
“         | &#147;            | \xc2\x93 (wrong)
”         | &#148;            | \xc2\x94 (wrong)
X         | &#120;            | \xc2\x92 (wrong)
§         | &#167;            | \xc2\xa7 (ok)
þ         | &#254;            |
¨         | &#168;            |
’         | &#146;            | \xc2\x92 (wrong)
–         | &#150;            |
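The mismatch in the table can be checked with the standard library alone; a minimal sketch (nothing here depends on BeautifulSoup): &#148; names Unicode code point 148 (U+0094, a C1 control character), while the character I actually want is U+201D, whose cp1252 byte happens to be 0x94:
# &#148; refers to Unicode code point 148 (U+0094); its UTF-8 form is \xc2\x94
print(chr(148).encode('utf-8'))     # b'\xc2\x94'  <- what I am getting
# the intended character is U+201D RIGHT DOUBLE QUOTATION MARK
print('\u201d'.encode('utf-8'))     # b'\xe2\x80\x9d'  <- what I want
# cp1252 maps the single byte 0x94 to U+201D
print(b'\x94'.decode('cp1252'))     # ”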
I used this code to get the equivalences:
characters = "’ “ ” X § þ ¨ ' –"
chars = characters.split()
for ch in chars:
    print(ch)
    cp1252 = ch.encode('cp1252')
    print(cp1252)
    decimal = cp1252[0]
    special = "&#" + str(decimal)
    print(special)
    print(ch.encode('utf-8'))
    print()
offenders = [120, 146]
for n in offenders:
    toHex = hex(n)
    print(toHex)
print()
#120
off = b'\x78'
print(off)
buff = off.decode('cp1252')
print(buff)
uni = buff.encode('utf-8')
print(uni)
print()
#146
off = b'\x92'
print(off)
buff = off.decode('cp1252')
print(buff)
uni = buff.encode('utf-8')
print(uni)
print()
Output:
’
b'\x92'
’
b'\xe2\x80\x99'
“
b'\x93'
“
b'\xe2\x80\x9c'
”
b'\x94'
”
b'\xe2\x80\x9d'
X
b'X'
X
b'X'
§
b'\xa7'
§
b'\xc2\xa7'
þ
b'\xfe'
þ
b'\xc3\xbe'
¨
b'\xa8'
¨
b'\xc2\xa8'
'
b"'"
'
b"'"
–
b'\x96'
–
b'\xe2\x80\x93'
0x78
0x92
b'x'
x
b'x'
b'\x92'
’
b'\xe2\x80\x99'
Some characters did not survive the copy-paste into my editor, like the weird X and the weird ’, so I added some code to deal with them.
What can I do to get \xe2\x80\x9d instead of \xc2\x94 for ” (&#148;)?
My setup:
Windows 7
Terminal: chcp 1252 + Lucida Console font
Python 3.3
BeautifulSoup 4
Looking forward to your answers.
Answer 0 (score: 1)
A numeric character reference in HTML refers to a Unicode code point, i.e., it does not depend on the character encoding of the document; e.g., &#148; is U+0094 CANCEL CHARACTER.
The bytes b'\xe2\x80\x9d' interpreted as UTF-8 are U+201D RIGHT DOUBLE QUOTATION MARK:
u'\u201d'.encode('utf-8') == b'\xe2\x80\x9d'
u'\u201d'.encode('cp1252') == b'\x94'
u'\u201d'.encode('ascii', 'xmlcharrefreplace') == b'&#8221;'
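For illustration, a byte-level round trip (standard library only) that recovers the intended character from the misinterpreted code point:
# b'\xc2\x94' is just the UTF-8 form of U+0094; turn the code point back
# into its single byte, then decode that byte as cp1252 to get U+201D.
raw = b'\xc2\x94'.decode('utf-8')              # '\x94' (U+0094)
print(raw.encode('latin-1').decode('cp1252'))  # ”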
To fix the code, remove the unnecessary bits:
from urllib.request import urlopen
from bs4 import BeautifulSoup
url = "http://www.sec.gov/path/to.htm"
soup = BeautifulSoup(urlopen(url))
print(soup)
If it fails, try sys.stdout.buffer.write(soup.encode('cp1252')) or set the PYTHONIOENCODING environment variable to cp1252:xmlcharrefreplace.
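A minimal sketch of that fallback (assuming soup from the snippet above):
import sys
# Bypass the text layer and write cp1252-encoded bytes directly;
# alternatively, before starting Python:
#     set PYTHONIOENCODING=cp1252:xmlcharrefreplace
sys.stdout.buffer.write(soup.encode('cp1252'))
sys.stdout.buffer.flush()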
Answer 1 (score: 1)
This is what I ended up using:
def reformatCp1252(match):
    # Turn numeric references in the C1 range (128-159) back into the raw
    # windows-1252 byte they were meant to name; leave everything else alone.
    codePoint = int(match.group(1))
    if 128 <= codePoint <= 159:
        return bytes([codePoint])
    else:
        return match.group()

localPage = urlopen(url).read()
formatedPage = re.sub(rb'&#(\d+);', reformatCp1252, localPage, flags=re.I)
localSoup = BeautifulSoup(formatedPage, "lxml", from_encoding="windows-1252")
Note: I am using bs4 with Python 3.3 on Windows 7.
I found that the from_encoding passed to BeautifulSoup really does not matter: you can give it utf-8 or windows-1252 and it produces a complete utf-8 encoding either way, converting the windows-1252 bytes to utf-8.
Basically, all the numeric code points are interpreted as utf-8, and the single bytes \x?? are interpreted as windows-1252.
As far as I know, only characters 128 to 159 in windows-1252 differ from the utf-8 characters.
For example, a mixed encoding (windows-1252: \x93 and \x94 together with utf-8: &#376;) will be output entirely in utf-8.
byteStream = b'\x93Hello\x94 (\xa7232.405 of this chapter) &#376; \x87'
# with the code above:
formatedPage = re.sub(rb'&#(\d+);', reformatCp1252, byteStream, flags=re.I)
localSoup = BeautifulSoup(formatedPage, "lxml", from_encoding="windows-1252")
print(localSoup.encode('utf-8'))
# and you can see that \x93 was transformed to its utf-8 equivalent.
Answer 2 (score: 0)
Beautiful Soup is interpreting the code points in the entities, i.e. the number in &#147;, as Unicode code points, not as CP-1252 code points. From the documentation and source of BeautifulSoup 4, it is not clear whether there is a way to change its interpretation of HTML entities. (The EntitySubstitution class looked promising, but it has no hooks for customizing it.)
The following solution is hacky and only works on the assumption that all non-ASCII characters (i.e. above code point 127) were misinterpreted in the same way. (This would not be the case if the original contained raw CP-1252 characters; BeautifulSoup would interpret those correctly, and this solution would trash them.)
Assuming you have the text from Beautiful Soup's conversion (with the HTML codes interpreted as Unicode code points):
soup = BeautifulSoup(page, from_encoding="cp1252")
txt = str(soup)
The following will reinterpret the code points as CP-1252:
def reinterpret_codepoints(chars, encoding='cp1252'):
    '''Converts code points above 127 in the text to the given
    encoding (assuming that all code points above 127 represent
    code points in the given encoding)
    '''
    for char, code in zip(chars, map(ord, chars)):
        if code < 128:
            yield char
        else:
            yield bytes((code,)).decode(encoding)

fixed_text = ''.join(reinterpret_codepoints(txt))
This solution is not optimized for performance, but I think it may be good enough for this particular case.
I extracted all the code points above 127 from the "fixed" text for the URL you gave in your example; they seem to cover the characters you are interested in.
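A quick usage sketch (with a made-up input string) showing the round trip:
# Hypothetical str(soup) output where &#147;/&#148; were decoded to the
# C1 code points U+0093/U+0094 instead of the intended curly quotes.
sample = '\x93Hello\x94 (\xa7232.405 of this chapter)'
print(''.join(reinterpret_codepoints(sample)))
# -> “Hello” (§232.405 of this chapter)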