How to move on when an error occurs in a Python requests response with Beautiful Soup

Asked: 2016-01-17 10:33:19

Tags: python web-crawler

I built a web scraper that takes thousands of URLs from a text file and then scrapes the data from each of those pages.
It has a lot of URLs now, and some of them are broken,
so it gives me this error:

Traceback (most recent call last):
  File "C:/Users/khize_000/PycharmProjects/untitled3/new.py", line 57, in <module>
    crawl_data("http://www.foasdasdasdasdodily.com/r/126e7649cc-sweetssssie-pies-mac-and-cheese-recipe-by-the-dr-oz-show")
  File "C:/Users/khize_000/PycharmProjects/untitled3/new.py", line 18, in crawl_data
    data = requests.get(url)
  File "C:\Python27\lib\site-packages\requests\api.py", line 67, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Python27\lib\site-packages\requests\api.py", line 53, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Python27\lib\site-packages\requests\sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Python27\lib\site-packages\requests\sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "C:\Python27\lib\site-packages\requests\adapters.py", line 437, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='www.foasdasdasdasdodily.com', port=80): Max retries exceeded with url: /r/126e7649cc-sweetssssie-pies-mac-and-cheese-recipe-by-the-dr-oz-show (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x0310FCB0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed',))

Here is my code:

import requests
from bs4 import BeautifulSoup

def crawl_data(url):
    global connectString
    data = requests.get(url)  # raises ConnectionError when the host is unreachable
    response = str(data)
    if response != "<Response [200]>":  # crude status check against the repr string
        return
    soup = BeautifulSoup(data.text, "lxml")
    titledb = soup.h1.string

But it still gives me the same exception.

I just want it to ignore the URLs that give no response and move on to the next URL.

2 Answers:

Answer 0 (score: 3):

You need to learn about exception handling. The simplest way to ignore these errors is to wrap the code that processes a single URL in a try-except construct, so that the code reads something like this:

try:
    <process a single URL>
except requests.exceptions.ConnectionError:
    pass

This means that if the specified exception occurs, your program will simply execute the pass statement (do nothing) and move on to the next URL.
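
For example, applied to a loop over the URLs read from the text file, the pattern could look like this (a minimal sketch; the file name urls.txt is an assumption, not from the question):

import requests

with open("urls.txt") as f:           # "urls.txt" is a hypothetical file name
    for line in f:
        url = line.strip()
        try:
            data = requests.get(url)  # raises ConnectionError for a dead host
        except requests.exceptions.ConnectionError:
            continue                  # ignore this URL and move on to the next one
        # ... process data here ...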

Answer 1 (score: 2):

Use try-except:

def crawl_data(url):
    global connectString
    try:
        data = requests.get(url)
    except requests.exceptions.ConnectionError:
        return  # the host did not respond; skip this URL

    response = str(data)
    soup = BeautifulSoup(data.text, "lxml")
    titledb = soup.h1.string
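
As a side note, requests.exceptions.ConnectionError is only one of several subclasses of requests.exceptions.RequestException. If timeouts and malformed URLs should be skipped as well, the except clause could catch the base class instead (a variant, not part of the original answer):

try:
    data = requests.get(url, timeout=10)  # the timeout value here is an arbitrary example
except requests.exceptions.RequestException:
    return  # covers ConnectionError, Timeout, and other failures raised by requests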