I've started experimenting with using R to test webpage load times. I wrote a small R function to do this:
page.load.time <- function(theURL, N = 10, wait_time = 0.05)
{
	require(RCurl)
	require(XML)
	TIME <- numeric(N)  # elapsed seconds of each fetch
	for(i in seq_len(N))
	{
		Sys.sleep(wait_time)  # pause between requests
		# [3] picks the "elapsed" component of system.time()
		TIME[i] <- system.time(webpage <- getURL(theURL, header = FALSE,
		                                         verbose = TRUE))[3]
	}
	return(TIME)
}
I'd welcome your help with it in a couple of ways. Sometimes the function fails with the following error:

Error in curlPerform(curl = curl, .opts = opts, .encoding = .encoding) :
  Failure when receiving data from the peer
Timing stopped at: 0.03 0 43.72

Any suggestions as to what causes this, and how to catch such errors and discard those runs?
Can you think of ways to improve the function above?
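(For the error-catching part, here is a minimal sketch of one possible approach, not from the original question: wrap the fetch in tryCatch() so a failed request yields NA instead of stopping the loop, then drop the NA runs afterwards. The helper name safe.load.time is hypothetical.)

safe.load.time <- function(theURL)
{
	# Return the elapsed fetch time, or NA if curl raises an error
	tryCatch(
		system.time(getURL(theURL, header = FALSE))[3],
		error = function(e) NA_real_
	)
}
# Inside page.load.time(): TIME[i] <- safe.load.time(theURL)
# Afterwards, discard the failed runs: TIME <- TIME[!is.na(TIME)]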
Update: I've rewritten the function. It is now painfully slow...
one.page.load.time <- function(theURL, HTML = TRUE, JavaScript = TRUE, Images = TRUE, CSS = TRUE)
{
	require(RCurl)
	require(XML)
	require(stringr)  # for str_detect() below
	TIME <- NULL  # named vector of elapsed times, filled in per component
	if(HTML) TIME["HTML"] <- system.time(doc <- htmlParse(theURL))[3]
	if(JavaScript) {
		theJS <- xpathSApply(doc, "//script/@src")  # find all JavaScript files
		# getBinaryURL() fetches one URL per call, so loop over the vector
		TIME["JavaScript"] <- system.time(sapply(theJS, getBinaryURL))[3]
	} else { TIME["JavaScript"] <- NA }
	if(Images) {
		theIMG <- xpathSApply(doc, "//img/@src")  # find all image files
		TIME["Images"] <- system.time(sapply(theIMG, getBinaryURL))[3]
	} else { TIME["Images"] <- NA }
	if(CSS) {
		theCSS <- xpathSApply(doc, "//link/@href")  # find all "link" hrefs
		ss_CSS <- str_detect(tolower(theCSS), "\\.css")  # keep only the stylesheets
		theCSS <- theCSS[ss_CSS]
		TIME["CSS"] <- system.time(sapply(theCSS, getBinaryURL))[3]
	} else { TIME["CSS"] <- NA }
	return(TIME)
}
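(An aside on the slowness, my addition rather than part of the original question: the rewrite downloads every asset sequentially, and the @src/@href values are used as-is even when they are relative paths. A sketch of a possible speedup, assuming the URLs have already been made absolute, e.g. with XML::getRelativeURL(), is to fetch each vector of URLs concurrently with RCurl's getURIAsynchronous(). The helper name timed.fetch is hypothetical.)

# Hypothetical helper: time the concurrent download of a vector of URLs.
# Assumes absolute URLs; returns NA when there is nothing to fetch.
timed.fetch <- function(urls)
{
	if(length(urls) == 0) return(NA_real_)
	system.time(getURIAsynchronous(urls))[3]
}
# e.g. inside one.page.load.time():
# theJS <- getRelativeURL(xpathSApply(doc, "//script/@src"), theURL)
# TIME["JavaScript"] <- timed.fetch(theJS)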
page.load.time <- function(theURL, N = 3, wait_time = 0.05, ...)
{
	require(RCurl)
	require(XML)
	require(plyr)
	TIME <- vector(length = N, "list")
	for(i in seq_len(N))
	{
		Sys.sleep(wait_time)  # pause between repetitions
		TIME[[i]] <- one.page.load.time(theURL, ...)
	}
	# Bind the N named vectors into one data.frame, one row per repetition
	TIME <- data.frame(URL = theURL, ldply(TIME, function(x) {x}))
	return(TIME)
}
a <- page.load.time("http://www.r-bloggers.com/", 2)
a
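(As a possible follow-up, my addition rather than part of the original post, the repeated timings can be averaged per component:)

# Average elapsed seconds per component across the N runs
colMeans(a[ , -1], na.rm = TRUE)  # drop the URL column before averaging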
Answer 0 (score: 2)
Your getURL call will only perform one request and fetch that webpage's source HTML. It won't fetch the CSS or Javascript or other elements. If by "parts" of the webpage you mean those, then you'll have to scrape the source HTML for them (in SCRIPT tags, css references, etc.) and fetch and time each of them separately.
Answer 1 (score: 1)
Perhaps SpiderMonkey from Omegahat could work: http://www.omegahat.org/SpiderMonkey/