I took the following code from the rNomads package and modified it a bit.
When I first ran it, I got:
> WebCrawler(url = "www.bikeforums.net")
[1] "www.bikeforums.net"
[1] "www.bikeforums.net"
Warning message:
XML content does not seem to be XML: 'www.bikeforums.net'
Here is the code:
require("XML")
# cleaning workspace
rm(list = ls())
# This function recursively searches for links in the given url and follows every single link.
# It returns a list of the final (dead end) URLs.
# depth - How many links to return. This avoids having to recursively scan hundreds of links. Defaults to NULL, which returns everything.
WebCrawler <- function(url, depth = NULL, verbose = TRUE) {
    doc <- XML::htmlParse(url)
    links <- XML::xpathSApply(doc, "//a/@href")
    XML::free(doc)
    if (is.null(links)) {
        if (verbose) {
            print(url)
        }
        return(url)
    } else {
        urls.out <- vector("list", length = length(links))
        for (link in links) {
            if (!is.null(depth)) {
                if (length(unlist(urls.out)) >= depth) {
                    break
                }
            }
            urls.out[[link]] <- WebCrawler(link, depth = depth, verbose = verbose)
        }
        return(urls.out)
    }
}
# Execution
WebCrawler(url = "www.bikeforums.net")
Any suggestions as to what I am doing wrong?
Update
Hi everyone,
I started this bounty because I think the R community needs a function like this that can crawl web pages. The solution that wins the bounty should show a function that takes two parameters:
WebCrawler(url = "www.bikeforums.net", xpath = "\\title" )
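Purely as an illustration of what I mean (not a solution), here is a rough sketch of such a two-argument crawler. The scheme-prefixed URL and the XPath query "//title" used below are assumptions for the example, not part of the original code:
# Rough illustrative sketch only: a crawler that takes a url and an XPath query.
# Assumes absolute URLs such as "http://www.bikeforums.net" and a valid XPath
# expression such as "//title" (both assumptions, not taken from the question).
require("XML")
WebCrawlerXPath <- function(url, xpath, depth = NULL, verbose = TRUE) {
    doc <- XML::htmlParse(url)
    # values of the nodes matched by the supplied XPath query on this page
    matches <- XML::xpathSApply(doc, xpath, XML::xmlValue)
    links <- XML::xpathSApply(doc, "//a/@href")
    XML::free(doc)
    if (verbose) print(url)
    if (is.null(links)) {
        return(list(url = url, matches = matches))
    }
    children <- vector("list", length = length(links))
    for (i in seq_along(links)) {
        if (!is.null(depth) && i > depth) break
        children[[i]] <- WebCrawlerXPath(links[i], xpath, depth = depth, verbose = verbose)
    }
    list(url = url, matches = matches, children = children)
}
# Hypothetical example call:
# WebCrawlerXPath(url = "http://www.bikeforums.net", xpath = "//title")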
I really appreciate your replies.
Answer (score: 2)
Insert the following code into your function, directly below the line links <- XML::xpathSApply(doc, "//a/@href"):
links <- XML::xpathSApply(doc, "//a/@href")
links1 <- links[grepl("http", links)] # As @Floo0 pointed out this is to capture non relative links
links2 <- paste0(url, links[!grepl("http", links)]) # and to capture relative links
links <- c(links1, links2)
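For example, with hypothetical link values (the relative path below is made up for illustration), the snippet separates absolute and relative links like this:
# Hypothetical values purely for illustration:
url   <- "http://www.bikeforums.net"
links <- c("http://example.com/page", "/forumdisplay.php?f=232")
links1 <- links[grepl("http", links)]                # absolute links, kept as-is
links2 <- paste0(url, links[!grepl("http", links)])  # relative links, prefixed with url
c(links1, links2)
# [1] "http://example.com/page"
# [2] "http://www.bikeforums.net/forumdisplay.php?f=232"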
Also remember that the url should be of the form http://www...... (i.e. include the http:// prefix).
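So the call from the question would become:
WebCrawler(url = "http://www.bikeforums.net")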
Also, you are never updating the urls.out list. As you have it, it will always be an empty list with the same length as links.
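One way to actually fill that list, sketched here under the assumption that you index by position rather than by the link string, would be:
# Sketch: fill the pre-allocated list by position
urls.out <- vector("list", length = length(links))
for (i in seq_along(links)) {
    if (!is.null(depth) && length(unlist(urls.out)) >= depth) break
    # index by position so the pre-allocated slot is actually filled
    urls.out[[i]] <- WebCrawler(links[i], depth = depth, verbose = verbose)
}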