Web scraping in R (with a loop)

Time: 2017-03-16 00:12:22

Tags: r loops web screen-scraping

I need to scrape data from this link and save the tables to a CSV. What I have so far: I can scrape the first and second pages with rvest and save those tables using the code below:

library(rvest)
webpage <- read_html("https://bra.areacodebase.com/number_type/M?page=0")
data <- webpage %>%
  html_nodes("table") %>%
  .[[1]] %>% 
  html_table()
 url<- "https://bra.areacodebase.com/number_type/M?page=0"
webpage2<- html_session(url) %>% follow_link(css = ".pager-next a")
data2 <- webpage %>%
 html_nodes("table") %>%
 .[[1]] %>%
  html_table()
data_all <- rbind(data, data2)
write.table(data_all, "df_data.csv", sep = ";", na = "", quote = FALSE, row.names = FALSE)

#result<- lapply(webpage, %>% follow_link(css = ".pager-next a"))
#data_all <- rbind(data:data2)

However, I can't figure out how to write the loop.

1 Answer:

Answer 0: (score: 4)

You can either use follow_link to move to the next page, or fetch each page directly by its URL:

webpage <- "https://bra.areacodebase.com/number_type/M?page=0"

for(i in 2:5089) {
  data <- read_html(webpage) %>%
    html_nodes("table") %>%
    .[[1]] %>% 
    html_table()

  # follow the pager's "next" link and keep its URL for the next iteration
  webpage <- html_session(webpage) %>% follow_link(css = ".pager-next a") %>% .[["url"]]
}
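
Note that data is overwritten on every pass, so only the last page survives the loop, and follow_link will fail on the final page where no ".pager-next a" link exists. A minimal sketch of one way around both issues, assuming every page except the last carries exactly one such pager link (possibly with a relative href) and that all tables share the same columns:

library(rvest)
library(xml2)

url <- "https://bra.areacodebase.com/number_type/M?page=0"
tables <- list()

repeat {
  page <- read_html(url)
  tables[[length(tables) + 1]] <- page %>%
    html_nodes("table") %>%
    .[[1]] %>%
    html_table()

  # stop once the current page has no "next" pager link
  next_link <- page %>% html_nodes(".pager-next a")
  if (length(next_link) == 0) break

  # resolve the (possibly relative) href against the current URL
  url <- url_absolute(html_attr(next_link[[1]], "href"), url)
}

data_all <- do.call(rbind, tables)
write.table(data_all, "df_data.csv", sep = ";", na = "", quote = FALSE, row.names = FALSE)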

Or, with the direct URL:

for(i in 0:5089) {
  webpage <- read_html(paste0("https://bra.areacodebase.com/number_type/M?page=", i))
  data <- webpage %>%
    html_nodes("table") %>%
    .[[1]] %>% 
    html_table()
}
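
As in the first loop, data here holds only the final page when the loop ends. A sketch of the same direct-URL idea that keeps every page and writes the CSV the question asked for, assuming the site really exposes pages 0 through 5089 and all tables have identical columns:

library(rvest)

pages <- vector("list", 5090)   # one slot per page, page=0 ... page=5089

for (i in 0:5089) {
  webpage <- read_html(paste0("https://bra.areacodebase.com/number_type/M?page=", i))
  pages[[i + 1]] <- webpage %>%
    html_nodes("table") %>%
    .[[1]] %>%
    html_table()

  Sys.sleep(0.5)   # brief pause between requests to be polite to the server
}

data_all <- do.call(rbind, pages)
write.table(data_all, "df_data.csv", sep = ";", na = "", quote = FALSE, row.names = FALSE)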