Handling article exceptions in newspaper

Date: 2017-07-19 13:53:53

Tags: python web-scraping nlp python-newspaper

I have some code that uses newspaper to look at various media outlets and download articles from them. This has worked well for a long time, but recently it has started acting up. I can see what the problem is, but as I'm new to Python I'm not sure of the best way to address it. Basically (I think) I need a modification that keeps the occasional malformed URL from crashing the script outright, and instead lets it skip that URL and move on to the others.
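For context, the `papers` referenced in the loops below comes from newspaper's source builder. A minimal sketch of that setup, with placeholder URLs (the real outlet list is assumed, not shown here):

    import newspaper

    # Placeholder URLs -- the actual outlet list is an assumption.
    urls = ['http://example-outlet-one.com', 'http://example-outlet-two.com']

    # memoize_articles=False keeps repeat runs from skipping previously seen articles.
    papers = [newspaper.build(u, memoize_articles=False) for u in urls]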

The error originates when I try to download the articles with:

article.download()

Some articles (they obviously change from day to day) throw the following error, but the script keeps running:

    Traceback (most recent call last):
      File "C:\Anaconda3\lib\encodings\idna.py", line 167, in encode
        raise UnicodeError("label too long")
    UnicodeError: label too long

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Anaconda3\lib\site-packages\newspaper\mthreading.py", line 38, in run
        func(*args, **kargs)
      File "C:\Anaconda3\lib\site-packages\newspaper\source.py", line 350, in download_articles
        html = network.get_html(url, config=self.config)
      File "C:\Anaconda3\lib\site-packages\newspaper\network.py", line 39, in get_html
        return get_html_2XX_only(url, config, response)
      File "C:\Anaconda3\lib\site-packages\newspaper\network.py", line 60, in get_html_2XX_only
        url=url, **get_request_kwargs(timeout, useragent))
      File "C:\Anaconda3\lib\site-packages\requests\api.py", line 72, in get
        return request('get', url, params=params, **kwargs)
      File "C:\Anaconda3\lib\site-packages\requests\api.py", line 58, in request
        return session.request(method=method, url=url, **kwargs)
      File "C:\Anaconda3\lib\site-packages\requests\sessions.py", line 502, in request
        resp = self.send(prep, **send_kwargs)
      File "C:\Anaconda3\lib\site-packages\requests\sessions.py", line 612, in send
        r = adapter.send(request, **kwargs)
      File "C:\Anaconda3\lib\site-packages\requests\adapters.py", line 440, in send
        timeout=timeout
      File "C:\Anaconda3\lib\site-packages\urllib3\connectionpool.py", line 600, in urlopen
        chunked=chunked)
      File "C:\Anaconda3\lib\site-packages\urllib3\connectionpool.py", line 356, in _make_request
        conn.request(method, url, **httplib_request_kw)
      File "C:\Anaconda3\lib\http\client.py", line 1107, in request
        self._send_request(method, url, body, headers)
      File "C:\Anaconda3\lib\http\client.py", line 1152, in _send_request
        self.endheaders(body)
      File "C:\Anaconda3\lib\http\client.py", line 1103, in endheaders
        self._send_output(message_body)
      File "C:\Anaconda3\lib\http\client.py", line 934, in _send_output
        self.send(msg)
      File "C:\Anaconda3\lib\http\client.py", line 877, in send
        self.connect()
      File "C:\Anaconda3\lib\site-packages\urllib3\connection.py", line 166, in connect
        conn = self._new_conn()
      File "C:\Anaconda3\lib\site-packages\urllib3\connection.py", line 141, in _new_conn
        (self.host, self.port), self.timeout, **extra_kw)
      File "C:\Anaconda3\lib\site-packages\urllib3\util\connection.py", line 60, in create_connection
        for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
      File "C:\Anaconda3\lib\socket.py", line 733, in getaddrinfo
        for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
    UnicodeError: encoding with 'idna' codec failed (UnicodeError: label too long)

Each article should then be parsed and run through natural language processing, with certain elements written to a dataframe, so next I have:

    for paper in papers:
        for article in paper.articles:
            article.parse()
            print(article.title)
            article.nlp()
            if article.publish_date is None:
                d = datetime.now().date()
            else:
                d = article.publish_date.date()
            stories.loc[i] = [paper.brand, d, datetime.now().date(), article.title,
                              article.summary, article.keywords, article.url]
            i += 1

(This is probably a bit sloppy, but that's a problem for another day.)

This runs fine until it hits one of the URLs with the error above, at which point it throws an article exception and the script crashes:

    C:\Anaconda3\lib\site-packages\PIL\TiffImagePlugin.py:709: UserWarning: Corrupt EXIF data.  Expecting to read 2 bytes but only got 0.
      warnings.warn(str(msg))

    ArticleException                          Traceback (most recent call last)
    <ipython-input-17-2106485c4bbb> in <module>()
          4 for paper in papers:
          5     for article in paper.articles:
    ----> 6         article.parse()
          7         print(article.title)
          8         article.nlp()

    C:\Anaconda3\lib\site-packages\newspaper\article.py in parse(self)
        183
        184     def parse(self):
    --> 185         self.throw_if_not_downloaded_verbose()
        186
        187         self.doc = self.config.get_parser().fromstring(self.html)

    C:\Anaconda3\lib\site-packages\newspaper\article.py in throw_if_not_downloaded_verbose(self)
        519         if self.download_state == ArticleDownloadState.NOT_STARTED:
        520             print('You must `download()` an article first!')
    --> 521             raise ArticleException()
        522         elif self.download_state == ArticleDownloadState.FAILED_RESPONSE:
        523             print('Article `download()` failed with %s on URL %s' %

    ArticleException:

So what's the best way to keep this from terminating my script? Should I address it at the download stage, where I get the unicode error, or at the parse stage, by telling it to ignore those bad addresses? And how would I implement that fix?

Any advice would be greatly appreciated.

3 Answers:

Answer 0 (score: 1):

I ran into the same problem, and although in general using `except: pass` is not recommended, the following worked for me:

    try:
        a.parse()
        file.write(a.title + '\n')
    except:
        pass
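One refinement on the same idea: a bare `except` also swallows things like `KeyboardInterrupt`, so catching `Exception` instead is slightly safer while still skipping the articles that fail. A sketch, using the same `a` and `file` names as above:

    try:
        a.parse()
        file.write(a.title + '\n')
    except Exception:
        pass  # skip any article that failed to download or parse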

Answer 1 (score: 0):

I found that Navid is correct on this exact problem.

But `.parse()` is only one of the functions that can trip you up. I wrap all of the calls in a try/except structure like this:

    word_list = []

    for words in google_news.articles:
        try:
            words.download()
            words.parse()
            words.nlp()
        except:
            pass

        word_list.append(words.keywords)
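Note that in this form `word_list.append(words.keywords)` still runs when the `try` block fails, so an article that never downloaded likely contributes an empty keyword list. Moving the append inside the `try` keeps only fully processed articles; a sketch under that assumption:

    word_list = []

    for words in google_news.articles:
        try:
            words.download()
            words.parse()
            words.nlp()
            # Only reached when download/parse/nlp all succeeded.
            word_list.append(words.keywords)
        except Exception:
            pass  # skip articles that failed anywhere in the pipeline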

Answer 2 (score: 0):

You can try catching the `ArticleException`. Don't forget to import the `newspaper` module:

    import newspaper

    try:
        article.download()
        article.parse()
    except newspaper.article.ArticleException:
        pass  # do something