rvest HTML table scraping returns an empty list

Asked: 2016-05-12 10:44:30

Tags: html r web-scraping rvest

rvest has worked well for me when scraping data from HTML tables, but for this particular site, http://www.sanzarrugby.com/superrugby/competition-stats/2016-team-ranking/, when I run the code

library(rvest)  # also provides the %>% pipe

url <- "http://www.sanzarrugby.com/superrugby/competition-stats/2016-team-ranking/"
rankings <- url %>%
  read_html() %>%
  html_nodes("table") %>%
  html_table()

all that comes back is an empty list. What could be wrong?

1 Answer:

Answer 0 (score: 3)

The "problem" with this site is that it dynamically loads a javascript file, which is then executed via a callback mechanism to create the JS data that the tables/visualizations are ultimately built from.
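A quick way to confirm that (a minimal check I'm adding, not part of the original diagnosis) is to count the &lt;table&gt; nodes in the raw HTML the server actually sends back:

library(rvest)

raw_pg <- read_html("http://www.sanzarrugby.com/superrugby/competition-stats/2016-team-ranking/")
length(html_nodes(raw_pg, "table"))
# 0 -- the tables are built client-side by javascript,
# so rvest never sees them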

One way to get the data is RSelenium, but getting that working is a problem for many folks.
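If you do have Selenium available, a minimal sketch looks like this (my addition, not the original answer; rsDriver() assumes it can download and start a local driver/server):

library(RSelenium)
library(rvest)

# start a local selenium server plus a browser session
rD <- rsDriver(browser = "firefox")
remDr <- rD$client

remDr$navigate("http://www.sanzarrugby.com/superrugby/competition-stats/2016-team-ranking/")
Sys.sleep(5)  # give the javascript time to build the tables

# hand the fully rendered DOM to rvest as usual
read_html(remDr$getPageSource()[[1]]) %>%
  html_nodes("table") %>%
  html_table()

remDr$close()
rD$server$stop()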

Another way is to use your browser's developer tools to inspect the JS requests, do a "Copy as cURL" (usually a right-click on the request), and then use some R-fu to get what we need. Since the request returns javascript, we need to do a bit of surgery on the response before we can finally convert the JSON.

library(jsonlite)
library(curlconverter)
library(httr)

# this is the `Copy as cURL` result, but you can leave it in your clipboard 
# and not do this in production. Read the `curlconverter` help for more info

CURL <- "curl 'http://omo.akamai.opta.net/competition.php?feed_type=ru3&competition=205&season_id=2016&user=USERNAME&psw=PASSWORD&jsoncallback=RU3_205_2016' -H 'DNT: 1' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Accept-Language: en-US,en;q=0.8' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36 Vivaldi/1.1.453.54' -H 'Accept: */*' -H 'Referer: http://www.sanzarrugby.com/superrugby/competition-stats/2016-team-ranking/' -H 'Connection: keep-alive' -H 'If-Modified-Since: Wed, 11 May 2016 14:47:09 GMT' -H 'Cache-Control: max-age=0' --compressed"

req <- make_req(straighten(CURL))[[1]]
req

# that makes:

# httr::VERB(verb = "GET", url = "http://omo.akamai.opta.net/competition.php?feed_type=ru3&competition=205&season_id=2016&user=USERNAME&psw=PASSWORD&jsoncallback=RU3_205_2016", 
#     httr::add_headers(DNT = "1", `Accept-Encoding` = "gzip, deflate, sdch", 
#         `Accept-Language` = "en-US,en;q=0.8", `User-Agent` = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36 Vivaldi/1.1.453.54", 
#         Accept = "*/*", Referer = "http://www.sanzarrugby.com/superrugby/competition-stats/2016-team-ranking/", 
#         Connection = "keep-alive", `If-Modified-Since` = "Wed, 11 May 2016 14:47:09 GMT", 
#         `Cache-Control` = "max-age=0"))

# which we can transform into the following after experimenting

URL <- "http://omo.akamai.opta.net/competition.php?feed_type=ru3&competition=205&season_id=2016&user=USERNAME&psw=PASSWORD&jsoncallback=RU3_205_2016"

pg <- GET(URL,
          add_headers(
            `User-Agent` = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36 Vivaldi/1.1.453.54", 
            Referer = "http://www.sanzarrugby.com/superrugby/competition-stats/2016-team-ranking/"))

# now all we need to do is remove the callback

txt <- content(pg, as = "text")

# strip the "RU3_205_2016( ... )" JSONP wrapper so plain JSON remains
dat_from_json <- fromJSON(gsub("\\)$", "", gsub("^RU3_205_2016\\(", "", txt)), flatten = FALSE)
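A quick str() call is handy for seeing what the feed actually contains before digging in (the exact field names depend on the Opta feed and aren't documented here):

str(dat_from_json, max.level = 2)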


# we can also try dropping the `jsoncallback` parameter from the URL
# entirely, but then the feed returns XML instead of JSON, which is
# fine since we can parse that easily too

URL <- "http://omo.akamai.opta.net/competition.php?feed_type=ru3&competition=205&season_id=2016&user=USERNAME&psw=PASSWORD"

pg <- GET(URL,
          add_headers(
            `User-Agent` = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36 Vivaldi/1.1.453.54", 
            Referer = "http://www.sanzarrugby.com/superrugby/competition-stats/2016-team-ranking/"))

xml_doc <- content(pg, as="parsed", encoding="UTF-8")

# but then you have to transform the XML, which I'll leave as an exercise to the OP :-)
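As a starting point for that exercise, here's a sketch using xml2 (which httr already used to parse the response above). The XPath and the attribute handling are guesses at the Opta feed layout, not verified, so inspect the document first and adjust:

library(xml2)

# look at the top-level structure to learn the real node names
xml_name(xml_doc)
xml_children(xml_doc)

# hypothetical extraction: pull every team node and turn its attributes
# into a data frame row ("//TeamRecord" is a placeholder XPath)
teams <- xml_find_all(xml_doc, "//TeamRecord")
do.call(rbind, lapply(xml_attrs(teams), function(a) {
  as.data.frame(as.list(a), stringsAsFactors = FALSE)
}))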