This is my code:
#!C:/Python27/python
# -*- coding: utf-8 -*-
import requests
from bs4 import BeautifulSoup
import urllib2
import sys
import urlparse
import io
url = "http://www.dlib.org/dlib/november14/beel/11beel.html"
#url = "http://eqa.unibo.it/article/view/4554"
#r = requests.get(url)
html = urllib2.urlopen(url)
soup = BeautifulSoup(html, "html.parser")
#soup = BeautifulSoup(r.text,'lxml')
if url.find("http://www.dlib.org") != -1:
    div = soup.find('td', valign='top')
else:
    div = soup.find('div', id='content')
f = open('path/file_name.html', 'w')
f.write(str(div))
f.close()
Scraping these pages, I have found some non-ASCII characters in the HTML files written by this script, which I need to remove or convert into readable characters. Any suggestions? Thanks.
Answer 0 (score: 3)
Try normalizing the string, then encoding it to ASCII while ignoring errors.
# -*- coding: utf-8 -*-
from unicodedata import normalize
string = 'úäô§'
if isinstance(string, str):
    string = string.decode('utf-8')
print normalize('NFKD', string).encode('ASCII', 'ignore')
# output: uao
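Applied to the script in the question, this approach could look like the sketch below (assuming the same `div` variable and output path from that script; in Python 2, `unicode(div)` returns the tag's markup as a unicode string, which is what `normalize` expects):
# -*- coding: utf-8 -*-
from unicodedata import normalize

# `div` is the Tag returned by soup.find() in the question's script
cleaned = normalize('NFKD', unicode(div)).encode('ASCII', 'ignore')

f = open('path/file_name.html', 'w')
f.write(cleaned)
f.close()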
Answer 1 (score: 2)
Extended characters use 8 bits (0-255), while ASCII characters use only 7 bits (0-127), so you can simply drop every character whose ord value is 128 or higher.
chr converts an integer to a character, and ord converts a character to an integer.
text = ''.join(c for c in str(div) if ord(c) < 128)
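As a quick, hypothetical illustration of what this filter does: iterating over a Python 2 byte string yields individual UTF-8 bytes, so accented characters are dropped entirely rather than transliterated:
# -*- coding: utf-8 -*-
sample = 'résumé – Beel'                          # a made-up UTF-8 byte string
print ''.join(c for c in sample if ord(c) < 128)  # prints 'rsum  Beel'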
Putting this into your script, the final code should look like this:
#!C:/Python27/python
# -*- coding: utf-8 -*-
import requests
from bs4 import BeautifulSoup
import urllib2
import sys
import urlparse
import io
url = "http://www.dlib.org/dlib/november14/beel/11beel.html"
#url = "http://eqa.unibo.it/article/view/4554"
#r = requests.get(url)
html = urllib2.urlopen(url)
soup = BeautifulSoup(html, "html.parser")
#soup = BeautifulSoup(r.text,'lxml')
if url.find("http://www.dlib.org") != -1:
    div = soup.find('td', valign='top')
else:
    div = soup.find('div', id='content')
f = open('path/file_name.html', 'w')
text = ''.join(c for c in str(div) if ord(c) < 128)
f.write(text)
f.close()
Answer 2 (score: -1)
Remove the non-ASCII characters from the text.
import string
# keep only ASCII letters (note: digits, spaces, and punctuation are dropped as well)
text = ''.join(ch for ch in text if ch in string.ascii_letters)
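If the intent is to keep all readable ASCII rather than only letters, `string.printable` (letters, digits, punctuation, and whitespace, all ASCII) may be the more useful set; a minimal sketch, assuming `text` already holds the scraped string:
import string

allowed = set(string.printable)  # ASCII letters, digits, punctuation, whitespace
text = ''.join(ch for ch in text if ch in allowed)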