Using tryCatch and rvest to handle 404 and other scraping errors

Date: 2016-06-30 04:35:39

Tags: r try-catch rvest

When retrieving h1 headings with rvest, I sometimes hit a 404 page. This stops the process and returns this error:

Error in open.connection(x, "rb") : HTTP error 404.

See the example below.

Data<-data.frame(Pages=c(
"http://boingboing.net/2016/06/16/spam-king-sanford-wallace.html",
"http://boingboing.net/2016/06/16/omg-the-japanese-trump-commer.html",
"http://boingboing.net/2016/06/16/omar-mateen-posted-to-facebook.html",
"http://boingboing.net/2016/06/16/omar-mateen-posted-to-facdddebook.html"))

Code used to retrieve the h1:

library(rvest)
sapply(Data$Pages, function(url){
 url %>%
 as.character() %>% 
 read_html() %>% 
 html_nodes('h1') %>% 
 html_text()
 })

Is there a way to include an argument that ignores the error and lets the process continue?

2 Answers:

Answer 0 (score: 9)

You're looking for try or tryCatch, which is how R handles error catching.

With try, you just wrap anything that might fail in try(), and it will capture the error and keep running:

library(rvest)

sapply(Data$Pages, function(url){
  try(
    url %>%
      as.character() %>% 
      read_html() %>% 
      html_nodes('h1') %>% 
      html_text()
  )
})

# [1] "'Spam King' Sanford Wallace gets 2.5 years in prison for 27 million Facebook scam messages"
# [2] "OMG, this Japanese Trump Commercial is everything"                                         
# [3] "Omar Mateen posted to Facebook during Orlando mass shooting"                               
# [4] "Error in open.connection(x, \"rb\") : HTTP error 404.\n"  
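A note not in the original answer: because try() returns objects of class "try-error" for failures, the bad entries can also be filtered out afterwards. A minimal sketch, using silent = TRUE to suppress the error output:

```r
library(rvest)

# Collect results with try(); failures come back as "try-error" objects
results <- lapply(Data$Pages, function(url){
  try(
    url %>%
      as.character() %>%
      read_html() %>%
      html_nodes('h1') %>%
      html_text(),
    silent = TRUE   # don't print the error message as it happens
  )
})

# Keep only the successful scrapes
clean <- results[!sapply(results, inherits, what = "try-error")]
```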

However, while this gets everything, it also inserts bad data into our results. tryCatch lets you configure what happens when an error is raised, by passing it a function to run when that condition arises:

sapply(Data$Pages, function(url){
  tryCatch(
    url %>%
      as.character() %>% 
      read_html() %>% 
      html_nodes('h1') %>% 
      html_text(), 
    error = function(e){NA}    # a function that returns NA regardless of what it's passed
  )
})

# [1] "'Spam King' Sanford Wallace gets 2.5 years in prison for 27 million Facebook scam messages"
# [2] "OMG, this Japanese Trump Commercial is everything"                                         
# [3] "Omar Mateen posted to Facebook during Orlando mass shooting"                               
# [4] NA  

There we go; much better.

Update

In the tidyverse, the purrr package supplies two functions, safely and possibly, which work similarly to try and tryCatch. They are adverbs, not verbs: they take a function, modify it so it handles errors, and return a new function (not a data object) that can then be called. Example:

library(tidyverse)
library(rvest)

df <- Data %>% rowwise() %>%     # Evaluate each row (URL) separately
    mutate(Pages = as.character(Pages),    # Convert factors to character for read_html
           title = possibly(~.x %>% read_html() %>%    # Try to take a URL, read it,
                                html_nodes('h1') %>%    # select header nodes,
                                html_text(),    # and collect text inside.
                            NA)(Pages))    # If error, return NA. Call modified function on URLs.

df %>% select(title)
## Source: local data frame [4 x 1]
## Groups: <by row>
## 
## # A tibble: 4 × 1
##                                                                                        title
##                                                                                        <chr>
## 1 'Spam King' Sanford Wallace gets 2.5 years in prison for 27 million Facebook scam messages
## 2                                          OMG, this Japanese Trump Commercial is everything
## 3                                Omar Mateen posted to Facebook during Orlando mass shooting
## 4                                                                                       <NA>
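For completeness (a sketch of my own, not from the original answer): safely, mentioned above but not demonstrated, wraps a function so that it always returns a list with result and error components. That is handy when you want to keep the error messages rather than discard them:

```r
library(purrr)
library(rvest)

# safely() returns a modified function; calling it never throws
get_title <- safely(function(url) {
  url %>% as.character() %>% read_html() %>% html_nodes('h1') %>% html_text()
})

out <- map(Data$Pages, get_title)

# Each element is list(result = ..., error = ...); exactly one is non-NULL
titles <- map(out, "result")   # NULL where the request failed
errors <- map(out, "error")    # the condition objects for the failures
```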

Answer 1 (score: 2)

For an explanation, you can look at this question here.

urls<-c(
    "http://boingboing.net/2016/06/16/spam-king-sanford-wallace.html",
    "http://boingboing.net/2016/06/16/omg-the-japanese-trump-commer.html",
    "http://boingboing.net/2016/06/16/omar-mateen-posted-to-facebook.html",
    "http://boingboing.net/2016/06/16/omar-mateen-posted-to-facdddebook.html")


readUrl <- function(url) {
    out <- tryCatch(
        {
            message("This is the 'try' part")
            url %>% as.character() %>% read_html() %>% html_nodes('h1') %>% html_text()
        },
        error = function(cond) {
            message(paste("URL does not seem to exist:", url))
            message("Here's the original error message:")
            message(cond)
            return(NA)
        }
    )
    return(out)
}

y <- lapply(urls, readUrl)
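Beyond the error handler, tryCatch also accepts a warning handler and a finally clause. A hypothetical extension of readUrl (not part of the original answer) that also catches warnings and always logs the URL it processed:

```r
library(rvest)

readUrl2 <- function(url) {
    out <- tryCatch(
        {
            url %>% as.character() %>% read_html() %>% html_nodes('h1') %>% html_text()
        },
        error = function(cond) {
            message(paste("URL does not seem to exist:", url))
            NA   # becomes the value of tryCatch
        },
        warning = function(cond) {
            message(paste("URL caused a warning:", url))
            NA
        },
        finally = {
            # Runs regardless of success, error, or warning
            message(paste("Processed URL:", url))
        }
    )
    return(out)
}
```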