Using rvest follow_link to loop through a list of webpages

Asked: 2018-12-06 09:33:09

Tags: r web-scraping lapply rvest

I am trying to scrape the government publication calendar at https://www.gov.uk/government/statistics, using the rvest follow_link function to go to each publication's link and scrape text from the next page. I can get this working for each individual page of results (40 publications are listed per page), but cannot get a loop working so that I can run the code over all of the publications listed.

This is the code I run first to get the list of publications (just for the first 10 pages of results):

#Loading the rvest package
library('rvest')
library('dplyr')
library('tm')

#######PUBLISHED RELEASES################

###function to add number after 'page=' in url to loop over all pages of published releases results (only 40 publications per page)
###check the site and see how many pages you want to scrape, to cover months of interest
##titles of publications - creates a list
publishedtitles <- lapply(paste0('https://www.gov.uk/government/statistics?page=', 1:10),
                 function(url_base){
                   url_base %>% read_html() %>% 
                     html_nodes('h3 a') %>% 
                     html_text()
                 })


##Dates of publications
publisheddates <- lapply(paste0('https://www.gov.uk/government/statistics?page=', 1:10),
                 function(url_base){
                   url_base %>% read_html() %>% 
                     html_nodes('.public_timestamp') %>% 
                     html_text()
                 })
##Organisations
publishedorgs <- lapply(paste0('https://www.gov.uk/government/statistics?page=', 1:10),
                function(url_base){
                  url_base %>% read_html() %>% 
                    html_nodes('.organisations') %>% 
                    html_text()
                })
##Links to publications
publishedpartial_links <- lapply(paste0('https://www.gov.uk/government/statistics?page=', 1:10),
               function(url_base){
                 url_base %>% read_html() %>% 
                   html_nodes('h3 a') %>% 
                   html_attr('href') 
               })

#Check all lists are the same length - if not, have to deal with missings before next step
# length(publishedtitles)
# length(publisheddates)
# length(publishedorgs)
# length(publishedpartial_links)
#str(publishedorgs)
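##Note: length() on these list objects returns the number of pages scraped (10 here),
##not the number of publications - to compare counts page by page, something like
##sapply() could be used, e.g.:
#sapply(publishedtitles, length) == sapply(publisheddates, length)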

#Combining all the lists to form a data frame
published <- data.frame(Title = unlist(publishedtitles),
                        Date = unlist(publisheddates),
                        Organisation = unlist(publishedorgs),
                        PartLinks = unlist(publishedpartial_links))

#adding prefix to partial links, to turn into full URLs
published$Links = paste("https://www.gov.uk", published$PartLinks, sep="")
#Drop partial links column
keeps <- c("Title", "Date", "Organisation", "Links")
published <- published[keeps]

I then want to run the following, but over all pages of results. I have run this code manually, changing the parameters for each page, so I know it works.

session1 <- html_session("https://www.gov.uk/government/statistics?page=1")
list1 <- list()
for(i in published$Title[1:40]){
  nextpage1 <- session1 %>% follow_link(i) %>% read_html()
  list1[[i]] <- nextpage1 %>%
    html_nodes(".grid-row") %>% html_text()
  df1 <- data.frame(text = list1)
  df1 <- as.data.frame(t(df1))
}

So the above needs the page=1 in html_session to change, as well as published$Title[1:40], and I am struggling to create a function or loop that handles both variables.

I thought I should be able to do this with lapply:

df <- lapply(paste0('https://www.gov.uk/government/statistics?page=', 1:10),
                          function(url_base){
                              for(i in published$Title[1:40]){
                                nextpage1 <- url_base %>% follow_link(i) %>% read_html()
                                list1[[i]]<- nextpage1 %>%
                                  html_nodes(".grid-row") %>% html_text()
                              }
                          }
)

But I get the error

Error in follow_link(., i) : is.session(x) is not TRUE

I have also tried other ways of looping and turning it into a function, but did not want to make this post too long!

Thanks in advance for any advice or guidance :)

1 Answer:

Answer 0 (score: 0)

It looks like you may just need to start the session inside the lapply function. In your last chunk of code, url_base is only a text string giving the base URL. Something like this would do it:

df <- lapply(paste0('https://www.gov.uk/government/statistics?page=', 1:10),
             function(url_base){
                 for(i in published$Title[1:40]){
                     tmpSession <- html_session(url_base)
                     nextpage1 <- tmpSession %>% follow_link(i) %>% read_html()
                     list1[[i]] <- nextpage1 %>%
                         html_nodes(".grid-row") %>% html_text()
                 }
             }
)

To change published$Title[1:40] for each iteration of the lapply function, you could create objects that hold the lower and upper bounds of the indices:

lowers <- cumsum(c(1, rep(40, 9)))
uppers <- cumsum(rep(40, 10))
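For ten pages of 40 results each, these work out to 1, 41, 81, ..., 361 and 40, 80, ..., 400. An equivalent way of writing the same bounds, if you prefer, would be:

#same index bounds expressed with seq(): lowers = 1, 41, ..., 361; uppers = 40, 80, ..., 400
lowers <- seq(1, by = 40, length.out = 10)
uppers <- lowers + 39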

Then you can include these in the call to lapply:

df <- lapply(1:10, function(j){
    url_base <- paste0('https://www.gov.uk/government/statistics?page=', j)
    for(i in published$Title[lowers[j]:uppers[j]]){
        tmpSession <- html_session(url_base)
        nextpage1 <- tmpSession %>% follow_link(i) %>% read_html()
        list1[[i]] <- nextpage1 %>%
            html_nodes(".grid-row") %>% html_text()
    }
}
)

Not sure if this is exactly what you want; I may have misunderstood what is supposed to change.
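One caveat with the sketches above: the for loop is the last expression in the anonymous function, so each lapply iteration returns NULL, and the assignments to list1 inside the function will not persist outside it. A minimal variant (untested against the live site, and assuming the titles in published$Title match the link text on each results page, as in your original loop) that builds a local list and returns it, so lapply actually collects the scraped text, would be:

df <- lapply(1:10, function(j){
    url_base <- paste0('https://www.gov.uk/government/statistics?page=', j)
    tmpSession <- html_session(url_base)   #one session per results page
    page_texts <- list()
    for(i in published$Title[lowers[j]:uppers[j]]){
        #follow_link() returns a new session, so tmpSession can be reused
        nextpage1 <- tmpSession %>% follow_link(i) %>% read_html()
        page_texts[[i]] <- nextpage1 %>%
            html_nodes(".grid-row") %>% html_text()
    }
    page_texts   #return the list so each element of df holds one page's results
})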