BeautifulSoup scrape suddenly stopped working

Date: 2017-07-03 23:34:11

Tags: python html python-2.7 web-scraping beautifulsoup

I'm trying to scrape a few divs from a NASA site and put their contents into a list. This code worked earlier and then suddenly stopped. I didn't knowingly change anything besides adding a few print statements, and now all of them print nothing or [], with no errors raised.

import re
import urllib2 
from BeautifulSoup import BeautifulSoup as soup
import ssl


url = "https://climate.nasa.gov"
context = ssl._create_unverified_context()
gcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
web_soup = soup(urllib2.urlopen(url,  context=context))

l = []

# get main-content div
main_div = web_soup.findAll(name="div", attrs={'class': 'change_number'})
print main_div
for element in main_div:
    print element
    l.append(float(str(element)[27:-7]))

print l
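As an aside, the float(str(element)[27:-7]) slice is brittle: it depends on the exact character length of the tag's opening markup, so any change to the div's attributes makes it fail silently. A sturdier sketch (using a hypothetical tag string standing in for one of the page's change_number divs) strips the markup and extracts the number with a regex:

```python
import re

# Hypothetical markup standing in for one of the page's change_number divs
tag_html = '<div class="change_number">406.17</div>'

# Strip the tags, then pull out the first decimal number
text = re.sub(r'<[^>]+>', '', tag_html)
match = re.search(r'[-+]?\d+(?:\.\d+)?', text)
value = float(match.group()) if match else None
```

This keeps working even if the site adds or reorders attributes on the div.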

Any help pinpointing this sudden bug would be greatly appreciated!

UPDATE 1: Just tried it in the interpreter; it fails the same way. main_div seems to be returning [].

UPDATE 2: Just checked the site to make sure the div change_number exists. It does.

UPDATE 3: Now I'm really confused. I have this code, which I'm fairly sure is essentially identical to the code above:

import re
import urllib2 
from BeautifulSoup import BeautifulSoup as soup
import ssl


url = "https://climate.nasa.gov"
context = ssl._create_unverified_context()
gcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
web_soup = soup(urllib2.urlopen(url,  context=context))

l = []

# get main-content div
main_div = web_soup.findAll(name="div", attrs={'class': 'change_number'})
for element in main_div:
    print element
    l.append(float(str(element)[27:-7]))

print l

But it throws a UnicodeEncodeError on the web_soup definition line:

Traceback (most recent call last):
  File "climate_nasa_change.py", line 10, in <module>
    web_soup = soup(urllib2.urlopen(url,  context=context))
  File "C:\Python27\lib\site-packages\BeautifulSoup.py", line 1522, in __init__
    BeautifulStoneSoup.__init__(self, *args, **kwargs)
  File "C:\Python27\lib\site-packages\BeautifulSoup.py", line 1147, in __init__
    self._feed(isHTML=isHTML)
  File "C:\Python27\lib\site-packages\BeautifulSoup.py", line 1189, in _feed
    SGMLParser.feed(self, markup)
  File "C:\Python27\lib\sgmllib.py", line 104, in feed
    self.goahead(0)
  File "C:\Python27\lib\sgmllib.py", line 143, in goahead
    k = self.parse_endtag(i)
  File "C:\Python27\lib\sgmllib.py", line 320, in parse_endtag
    self.finish_endtag(tag)
  File "C:\Python27\lib\sgmllib.py", line 358, in finish_endtag
    method = getattr(self, 'end_' + tag)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 11-12: ordinal not in range(128)

UPDATE 4: Aaaand now it magically works again. I honestly have no idea what's going on.

UPDATE 5: Now it's broken again. I swear I'm not changing anything. Seriously considering defenestrating my computer. Please help.

UPDATE 6: Just tried pinging climate.nasa.gov. It doesn't always complete, even though the page keeps loading in my browser. Could this be causing BeautifulSoup to fail?

1 Answer:

Answer 0 (score: 1)

The problem is that the site sometimes returns a gzip-encoded response and sometimes plaintext. This is easy to handle if you use requests, because it decodes the content automatically:

import requests

web_soup = soup(requests.get(url, verify=False).text)

Note that requests is not part of the standard library, so you'll have to install it. If you insist on using urllib2, you can decode the response with zlib when it is encoded:

import zlib

# 16 + MAX_WBITS tells zlib to expect a gzip header and trailer
decode = lambda data: zlib.decompress(data, 16 + zlib.MAX_WBITS)
response = urllib2.urlopen(url, context=context)
headers = response.info()
content = response.read()
html = decode(content) if headers.get('content-encoding') == 'gzip' else content
web_soup = soup(html)
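The gzip branch above can be exercised without the network by compressing a sample body the way a server would. This is only a sketch to show that zlib with wbits = 16 + MAX_WBITS round-trips a gzip stream (compressobj is used rather than the gzip module so the same code runs on Python 2 and 3):

```python
import zlib

body = b'<html><body><div class="change_number">406.17</div></body></html>'

# Compress with a gzip header/trailer, as a server sending
# Content-Encoding: gzip would
compressor = zlib.compressobj(9, zlib.DEFLATED, 16 + zlib.MAX_WBITS)
compressed = compressor.compress(body) + compressor.flush()

# Decode exactly as in the urllib2 branch above
decoded = zlib.decompress(compressed, 16 + zlib.MAX_WBITS)
```

If the server sent plaintext instead, zlib.decompress would raise zlib.error, which is why the code checks the content-encoding header first.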