readHTMLTable and rvest not working for HTML table scraping

Asked: 2016-10-19 18:52:41

Tags: r web-scraping html-parsing rvest

I have been trying to scrape data from the HTML table in question.

library(rvest)

url <- "http://www.njweather.org/data/daily"
Precip <- url %>%
    html() %>%
    html_nodes(xpath='//*[@id="dataout"]') %>%
    html_table()

This returns:

Warning message:
    'html' is deprecated. Use 'read_html' instead. See help("Deprecated")

and a Precip list with no values.

I also tried the readHTMLTable() function:

library(XML)
readHTMLTable("http://www.njweather.org/data/daily", header = TRUE, stringsAsFactors = FALSE)

This also returns an empty list.
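The underlying cause (which both answers below work around) is that the table body is filled in by JavaScript at load time, so the raw HTML that rvest or readHTMLTable downloads contains only an empty shell. A minimal Python sketch of the same effect, using a made-up page fragment rather than the real njweather.org markup:

```python
import re

# A simplified stand-in for a page whose table is populated client-side:
# the <tbody> is empty in the served HTML, and the data lives in a <script> tag.
raw_html = """
<table id="dataout"><thead><tr><th>city</th><th>precip</th></tr></thead>
<tbody></tbody></table>
<script>var json = {"aaData": [{"city": "Berkeley Twp.", "precip_daily": 0}]};</script>
"""

# A static parse of the served HTML finds no data rows at all...
rows = re.findall(r"<tbody>(.*?)</tbody>", raw_html, re.S)[0].strip()
print(rows == "")  # True: the table body is empty in the raw HTML

# ...while the actual records sit inside the script text instead.
print('"city"' in raw_html)  # True
```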

2 Answers:

Answer 0 (score: 3)

Unfortunately, that "Save to CSV" button is a Shockwave/Flash control that just extracts the JSON content from the page, so there's no way to call it directly (via a URL), though it could be clicked in a Firefox RSelenium web driver context (but... ugh!).

Rather than use RSelenium or the newer webdriver package, I can suggest a bit of script node content "surgery" with gsub(), then evaluating the content with V8:

library(dplyr)
library(rvest)
library(readr)
library(V8)

ctx <- v8()

pg <- read_html("http://www.njweather.org/data/daily")

html_nodes(pg, xpath=".//script[contains(., '#dtable')]") %>%
  html_text() %>%
  gsub("^.*var json", "var json", .) %>%
  gsub("var dTable.*", "", .) %>%
  JS() %>%
  ctx$eval()

ctx$get("json")$aaData %>%
  type_convert() %>%
  glimpse()

## Observations: 66
## Variables: 16
## $ city                 <chr> "Berkeley Twp.", "High Point Monument", "Pequest", "Haworth", "Sicklerville", "Howell"...
## $ state                <chr> "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "NJ", "N...
## $ date                 <date> 2016-10-19, 2016-10-19, 2016-10-19, 2016-10-19, 2016-10-19, 2016-10-19, 2016-10-19, 2...
## $ source               <chr> "Mesonet", "SafetyNet", "Mesonet", "Mesonet", "Mesonet", "Mesonet", "Mesonet", "Mesone...
## $ DT_RowId             <int> 1032, 1030, 1029, 1033, 1034, 3397, 1101, 471, 454, 314, 299, 315, 316, 450, 317, 3398...
## $ temperaturemax_daily <int> 84, 73, 84, 85, 86, 85, 87, 81, 83, 83, 83, 83, 80, 81, 84, 86, 84, 72, 85, 85, 85, 84...
## $ temperaturemin_daily <int> 65, 63, 56, 65, 63, 66, 66, 64, 63, 66, 62, 64, 66, 62, 62, 65, 66, 67, 62, 64, 65, 62...
## $ dewpointmax_daily    <int> 68, NA, 65, 67, 68, 68, 68, 65, NA, NA, NA, NA, NA, NA, 69, 68, 69, 68, 69, 70, 67, 67...
## $ dewpointmin_daily    <int> 63, NA, 56, 60, 62, 63, 61, 55, NA, NA, NA, NA, NA, NA, 62, 62, 61, 65, 61, 63, 62, 61...
## $ relhumidmax_daily    <int> 94, NA, 99, 94, 96, 91, 92, 90, NA, NA, NA, NA, NA, NA, 102, 93, 88, 94, 99, 94, 94, 9...
## $ relhumidmin_daily    <int> 50, NA, 39, 45, 48, 51, 43, 41, NA, NA, NA, NA, NA, NA, 51, 46, 49, 83, 51, 51, 48, 48...
## $ pressuremax_daily    <dbl> 29.97, NA, 29.97, 29.96, 30.02, 30.03, 29.99, 30.04, NA, NA, 30.01, 30.04, NA, 30.00, ...
## $ pressuremin_daily    <dbl> 29.86, NA, 29.86, 29.84, 29.91, 29.90, 29.88, 29.90, NA, NA, 29.91, 29.95, NA, 29.88, ...
## $ precip_daily         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,...
## $ windspmax_daily      <int> 17, 32, 16, 12, 8, 13, 15, 21, 13, 12, 10, 14, 19, NA, 13, 10, 11, 15, 15, 10, 13, 13,...
## $ windspmaxdir_daily   <chr> "SW", NA, "WSW", "NW", "W", "WSW", "WSW", "W", "S", NA, NA, NA, NA, NA, "SSW", "SSW", ...

Also, read up on the changes to rvest. Switching to xml2 should be something you get into muscle memory (html() will go away at some point).
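The answer's trick is really just "cut the JSON literal out of the script text and evaluate it." The same surgery can be sketched without a JS engine; here is a hedged Python illustration where the script snippet is made up to mirror the page's shape (the json/aaData/dTable names follow the answer above, the station rows are invented):

```python
import json
import re

# Illustrative <script> content shaped like the page's: a "var json = {...};"
# assignment followed by unrelated table-setup code.
script_text = """
$(document).ready(function() {
var json = {"aaData": [
  {"city": "Berkeley Twp.", "state": "NJ", "precip_daily": 0},
  {"city": "Pequest", "state": "NJ", "precip_daily": 0}
]};
var dTable = $('#dtable').dataTable({});
});
"""

# Same surgery as the two gsub() calls: keep everything from "var json"
# up to the "var dTable" line...
body = re.sub(r"^.*?var json", "var json", script_text, flags=re.S)
body = re.sub(r"var dTable.*", "", body, flags=re.S)

# ...then strip the JS assignment syntax and parse the remaining JSON,
# instead of evaluating it with V8.
payload = body.replace("var json =", "").strip().rstrip(";")
records = json.loads(payload)["aaData"]
print([r["city"] for r in records])  # ['Berkeley Twp.', 'Pequest']
```

The V8 route in the R answer is more robust when the script builds the object programmatically; plain JSON slicing like this only works when the literal sits verbatim in the source.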

Answer 1 (score: 0)

Here is another option that uses only the rvest and stringr packages. In cases like this, all of the data is already stored in the web page and just needs to be extracted. The records are stored between braces {}, with the first field named "city". This step takes some trial and error to locate the records and determine their structure/order; the str_locate_all function call performs this search with a non-greedy match. Once these strings (records) have been extracted, it is just a matter of parsing out each field's value and creating the final data frame.

library(rvest)
library(stringr)

#read web page
pg <- read_html("http://www.njweather.org/data/daily")

#find data of interest {"first field.*? }
#find data within brackets with the first field named city then any number of characters(.*) - not greedy(?)
#finds start and stop
recStartStop<-str_locate_all(pg, "\\{ \"city.*?\\}")[[1]]
#extract the records out from page
records<-str_sub(pg, recStartStop[,1]+1, recStartStop[,2]-1)

#replaces , within the string if necessary
#records<-gsub(", ", "_", records  )

#split and reshape the data in a data frame
records<-strsplit(gsub("\"", "", records), ',')
columnsNeeded<-length(records[[1]])
data<-sapply(strsplit(unlist(records), ":"), FUN=function(x){x[2]})
#if the number of fields is not consistent (columnsNeeded) this will error
df<-data.frame(matrix(data, ncol=columnsNeeded, byrow=TRUE))

#name the columns using the field names from the first record
names(df)<-sapply(strsplit(unlist(records[1]), ":"), FUN=function(x){x[1]})

Hopefully the comments in the code clearly explain each step.
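As a cross-check of this answer's approach, the same non-greedy { "city ... } extraction can be sketched in Python on a made-up page fragment (the field names mirror the R code above; the values are illustrative, not real station data):

```python
import re

# Illustrative page text containing brace-delimited records whose first
# field is "city", mirroring what str_locate_all() searches for in R.
page_text = (
    'var json = {"aaData": ['
    '{ "city":"Berkeley Twp.","state":"NJ","precip_daily":"0" },'
    '{ "city":"Pequest","state":"NJ","precip_daily":"0" }'
    ']};'
)

# Non-greedy match of each record, like the R pattern "\\{ \"city.*?\\}".
records = re.findall(r'\{ "city.*?\}', page_text)

# Split each record into fields, then each field into name:value pairs,
# mirroring the strsplit()/sapply() steps that build the data frame.
rows = []
for rec in records:
    fields = rec.strip("{} ").replace('"', "").split(",")
    rows.append(dict(f.split(":", 1) for f in fields))

print(rows[0]["city"])  # Berkeley Twp.
print(len(rows))        # 2
```

As in the R version, this breaks if a field value contains a comma or an unquoted brace, which is why the commented-out gsub() guard exists in the answer's code.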