Storing links retrieved from multiple pages in a single variable in R

Date: 2018-06-08 12:44:00

Tags: r web-scraping

I want to extract links from a URL that has multiple pages and store all of the extracted links in a single variable. My code mostly works, but the variable I created only stores the links retrieved from the last page. How can I adjust the code so that it stores the links retrieved from all pages?

Here is my code:

page_links <- paste0('https://cryptoslate.com/ico-database/recent-icos/page/', 1:7)
# Retrieving links to all pages I want to retrieve links from 

ICO_links_CS <- c()
# Creating an empty variable for links to be stored in 

# loop through each page, extract all links and select the ones I want to retrieve, then doing some cleaning up
for (i in length(page_links)) {
  ICO_links_CS <- c(ICO_links_CS, deduped.data_Cryptoslate)
  page_links_1 <- read_html(page_links[i])
  Extractlinks <- html_attr(html_nodes(page_links_1, "a"), "href")
  ICO_links_Cryptoslate <- str_subset(Extractlinks, "https://cryptoslate.com/coins/")
  deduped.data_Cryptoslate <- unique(ICO_links_Cryptoslate)
}

1 Answer:

Answer 0 (score: 0)

The main problem is that you actually need for (i in 1:length(page_links)), not for (i in length(page_links)). In addition, you may want to initialize ICO_links_CS as a list and store each page's results as an element of that list, rather than appending each page's results to ICO_links_CS. For example,

library(rvest)
library(stringr)

# Retrieving links to all pages I want to retrieve links from 
page_links <- paste0('https://cryptoslate.com/ico-database/recent-icos/page/', 1:7)

# Creating an empty variable for links to be stored in 
ICO_links_CS <- vector(length = length(page_links), mode = "list")

# loop through each page, extract all links and select the ones I want to retrieve, then doing some cleaning up  
for (i in 1:length(page_links)) {
  page_links_1 <- read_html(page_links[i])
  Extractlinks <- html_attr(html_nodes(page_links_1, "a"), "href")
  ICO_links_Cryptoslate <- str_subset(Extractlinks, "https://cryptoslate.com/coins/")
  ICO_links_CS[[i]] <- unique(ICO_links_Cryptoslate)
}

Finally,

ICO_links_CS <- unlist(ICO_links_CS)
str(ICO_links_CS)
# chr [1:661] "https://cryptoslate.com/coins/"
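
As an aside, the same per-page extraction can also be expressed with lapply(), which builds the list of results directly and removes the need to pre-allocate or index into it. The sketch below is only an illustration of that alternative, using the same rvest/stringr calls as above; it has not been run against the live site.

library(rvest)
library(stringr)

page_links <- paste0('https://cryptoslate.com/ico-database/recent-icos/page/', 1:7)

# One anonymous function per page: parse, pull every href, keep coin links only
ICO_links_CS <- unlist(lapply(page_links, function(url) {
  page  <- read_html(url)                                # download and parse one page
  links <- html_attr(html_nodes(page, "a"), "href")      # all <a href=...> values
  unique(str_subset(links, "https://cryptoslate.com/coins/"))
}))

The result is the same flat character vector as the unlist() call in the loop version; which style to prefer is mostly a matter of taste.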