Looping through URLs and storing information in R

Time: 2016-07-28 21:01:29

Tags: r for-loop web-scraping

I'm trying to write a for loop that will loop through a number of websites, extract a few elements from each, and store the results in a table in R. This is what I have so far; I'm just not sure how to set up the for loop, or how to gather all the results into one variable so I can export them later.

library("dplyr")
library("rvest")
library("leaflet")
library("ggmap")


# read the page once, then pull out each element
url <- read_html("http://www.webiste_name.com/")

agent   <- html_nodes(url, "h1 span")
fnames  <- html_nodes(url, "#offNumber_mainLocContent span")
address <- html_nodes(url, "#locStreetContent_mainLocContent")

# combine the three text values into a single row
scrape <- t(c(html_text(agent), html_text(fnames), html_text(address)))


View(scrape)
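For one page that works, so what I'm picturing is a skeleton like the one below (the URLs are just placeholders); the part I can't figure out is the loop body and how to collect each page's row:

# hypothetical vector of the pages I want to visit
urls <- c("http://www.webiste_name.com/page1",
          "http://www.webiste_name.com/page2")

results <- NULL  # somewhere to accumulate one row per page
for(u in urls){
    # read u, extract agent/fnames/address, append a row to results?
}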

2 Answers:

Answer 0 (score: 2)

Since your question isn't fully reproducible, here is a toy example that loops through three URLs (the Red Sox, Jays, and Yankees):

library(rvest)

# teams
teams <- c("BOS", "TOR", "NYY")

# init
df <- NULL

# loop
for(i in teams){
    # find url
    url <- paste0("http://www.baseball-reference.com/teams/", i, "/")
    page <- read_html(url)
    # grab table
    table <- page %>%
        html_nodes(css = "#franchise_years") %>%
        html_table() %>%
        as.data.frame()
    # bind to dataframe
    df <- rbind(df, table)
}

# view captured data
View(df)

The loop works because it substitutes each team abbreviation, in turn, for i in the paste0() call.
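One side note: rbind inside the loop re-copies df on every iteration, which gets slow over many pages. A minimal variation of the same example (same teams and selector) that collects each table in a pre-allocated list and binds once at the end:

# pre-allocate a list, fill one slot per team, then bind once
tables <- vector("list", length(teams))
for(i in seq_along(teams)){
    url <- paste0("http://www.baseball-reference.com/teams/", teams[i], "/")
    tables[[i]] <- read_html(url) %>%
        html_nodes(css = "#franchise_years") %>%
        html_table() %>%
        as.data.frame()
}
df <- do.call(rbind, tables)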

Answer 1 (score: 0)

I would go with lapply.

The code would look something like this:

library("rvest")
library("dplyr")

#a vector of urls you want to scrape
URLs <- c("http://...1", "http://...2", ....)

df <- lapply(URLs, function(u){

      html.obj <- read_html(u)
      agent    <- html_nodes(html.obj, "h1 span") %>% html_text()
      fnames   <- html_nodes(html.obj, "#offNumber_mainLocContent span") %>% html_text()
      address  <- html_nodes(html.obj, "#locStreetContent_mainLocContent") %>% html_text()

      data.frame(Agent = agent, Fnames = fnames, Address = address)
})

# stack the per-URL data frames into one table
df <- do.call(rbind, df)

View(df)
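Whichever approach you use, once df holds the combined table you can export it, for example to CSV (the file name here is just an example):

# save the combined results for later use
write.csv(df, "scrape_results.csv", row.names = FALSE)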