Download all the files on a web page with R?

Date: 2017-12-28 11:49:46

Tags: r xml download

My question is almost identical to here. I want to download all the files from this page. The difference is that my files don't share a single naming pattern I could use to build the download URLs.

How can I download them in R?

2 answers:

Answer 0 (score: 2)

This is not the most elegant solution, but it seemed to work correctly when I tried it on a random subset of helplinks.

library(rvest)

#Grab filenames from separate URL
helplinks <- read_html("http://rdf.muninn-project.org/api/elevation/datasets/srtm/") %>% html_nodes("a") %>% html_text(trim = T)

#Keep only filenames relevant for download
helplinks <- helplinks[grepl("srtm", helplinks)]

#Download files - make sure to adjust the `destfile` argument of the download.file function.
lapply(helplinks, function(x) download.file(sprintf("http://srtm.csi.cgiar.org/SRT-ZIP/SRTM_V41/SRTM_Data_GeoTiff/%s", x), sprintf("C:/Users/aud/Desktop/%s", x)))
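A note of caution, added here rather than part of the original answer: on Windows, download.file defaults to text mode, which can corrupt zip archives, and a single failed download aborts the whole lapply call. A hedged sketch with mode = "wb" and basic error handling, assuming the helplinks vector and the CGIAR URL pattern from the answer above:

# Sketch only: `helplinks` and the base URL come from the answer above;
# adjust `dest_dir` to wherever you want the files saved.
base_url <- "http://srtm.csi.cgiar.org/SRT-ZIP/SRTM_V41/SRTM_Data_GeoTiff/%s"
dest_dir <- tempdir()

for (f in helplinks) {
  dest <- file.path(dest_dir, f)
  res <- tryCatch(
    # mode = "wb" keeps binary files (zips) intact on Windows
    download.file(sprintf(base_url, f), dest, mode = "wb"),
    error = function(e) {
      message("Failed: ", f, " (", conditionMessage(e), ")")
      NA
    }
  )
  Sys.sleep(2)  # be gentle with the server between requests
}

The tryCatch wrapper logs and skips failures instead of stopping, so a partial run leaves you with whatever did download.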

Answer 1 (score: 0)

# use the FTP mirror link provided on the page
mirror <- "ftp://srtm.csi.cgiar.org/SRTM_v41/SRTM_Data_GeoTIFF/"

# read the file listing
pg <- readLines(mirror)

# take a look
head(pg)
## [1] "06-18-09  06:18AM               713075 srtm_01_02.zip"
## [2] "06-18-09  06:18AM               130923 srtm_01_07.zip"
## [3] "06-18-09  06:18AM               130196 srtm_01_12.zip"
## [4] "06-18-09  06:18AM               156642 srtm_01_15.zip"
## [5] "06-18-09  06:18AM               317244 srtm_01_16.zip"
## [6] "06-18-09  06:18AM               160847 srtm_01_17.zip"

# clean it up and make them URLs
fils <- sprintf("%s%s", mirror, sub("^.*srtm", "srtm", pg))

head(fils)
## [1] "ftp://srtm.csi.cgiar.org/SRTM_v41/SRTM_Data_GeoTIFF/srtm_01_02.zip"
## [2] "ftp://srtm.csi.cgiar.org/SRTM_v41/SRTM_Data_GeoTIFF/srtm_01_07.zip"
## [3] "ftp://srtm.csi.cgiar.org/SRTM_v41/SRTM_Data_GeoTIFF/srtm_01_12.zip"
## [4] "ftp://srtm.csi.cgiar.org/SRTM_v41/SRTM_Data_GeoTIFF/srtm_01_15.zip"
## [5] "ftp://srtm.csi.cgiar.org/SRTM_v41/SRTM_Data_GeoTIFF/srtm_01_16.zip"
## [6] "ftp://srtm.csi.cgiar.org/SRTM_v41/SRTM_Data_GeoTIFF/srtm_01_17.zip"

# test download
download.file(fils[1], basename(fils[1]))

# validate it worked before slamming the server (your job)

# do the rest whilst being kind to the mirror server
for (f in fils[-1]) {
  download.file(f, basename(f))
  Sys.sleep(5) # unless you have entitlement issues, space out the downloads by a few seconds
}

If you don't mind using a non-base package, curl can fetch the filenames for you instead of doing the sub above:

unlist(strsplit(rawToChar(curl::curl_fetch_memory(mirror, curl::new_handle(dirlistonly=TRUE))$content), "\n"))
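One caveat with that one-liner, noted here rather than in the original answer: an FTP directory listing typically uses CRLF line endings, so splitting on "\n" alone can leave a trailing "\r" on each name. Splitting on "\r\n" (or trimming) avoids that. A small sketch using a sample listing string in place of the live server response:

# Sample raw listing, CRLF-separated as a server might return it
raw_listing <- "06-18-09  06:18AM  713075 srtm_01_02.zip\r\n06-18-09  06:18AM  130923 srtm_01_07.zip\r\n"

# Split on CRLF and drop any empty entries
lines <- unlist(strsplit(raw_listing, "\r\n"))
lines <- lines[nzchar(lines)]

# Keep only the filename: strip everything up to the last whitespace
fils <- sub("^.*\\s", "", lines)
fils
## [1] "srtm_01_02.zip" "srtm_01_07.zip"

The same sub() call can be applied to the vector returned by the curl one-liner to clean up any stray "\r" characters.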