I am trying to extract data from the following site:
https://www.zomato.com/ncr/restaurants/north-indian
using R programming. I am a learner and a beginner in this area!
Here is what I tried:
> library(XML)
> doc<-htmlParse("the url mentioned above")
> Warning message:
> XML content does not seem to be XML: 'https://www.zomato.com/ncr/restaurants/north-indian'
That was one attempt. I also tried readLines(), which produced the output below:
> readLines("the URL as mentioned above") [I can't include more than two links, so writing it like this]
> Error in file(con, "r") : cannot open the connection
> In addition: Warning message:
> In file(con, "r") : unsupported URL scheme
As far as I can tell the page is not XML, as the error indicates, but I have seen data captured from this site by other means. I did try tidying the HTML to convert it to XML or XHTML and then working with that, but I got nowhere; maybe I don't know the actual process for using HTML Tidy! :( Not sure! Any suggestions to resolve this, and corrective steps if any?
Answer 0 (score: 3)
The rvest package also comes in very handy here (and it is built on top of the XML package, among others):
library(rvest)
pg <- html("https://www.zomato.com/ncr/restaurants/north-indian")
# extract all the restaurant names
pg %>% html_nodes("a.result-title") %>% html_text()
## [1] "Bukhara - ITC Maurya " "Karim's "
## [3] "Gulati " "Dhaba By Claridges "
## ...
## [27] "Dum-Pukht - ITC Maurya " "Maal Gaadi "
## [29] "Sahib Sindh Sultan " "My Bar & Restaurant "
# extract the ratings
pg %>% html_nodes("div.rating-div") %>% html_text() %>% gsub("[[:space:]]", "", .)
## [1] "4.3" "4.1" "4.2" "3.9" "3.8" "4.1" "4.1" "3.4" "4.1" "4.3" "4.2" "4.2" "3.9" "3.8" "3.8" "3.4" "4.0" "3.7" "4.1"
## [20] "4.0" "3.8" "3.8" "3.9" "3.8" "4.0" "4.0" "4.7" "3.8" "3.8" "3.4"
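As a small follow-up, the two node sets can be combined into a data frame. This is only a sketch: it assumes the CSS selectors above (a.result-title and div.rating-div) still match the live page, which may well have changed, and it uses read_html(), which replaces html() in current versions of rvest.
library(rvest)
# read_html() is the current rvest reader; older versions used html()
pg <- read_html("https://www.zomato.com/ncr/restaurants/north-indian")
names   <- pg %>% html_nodes("a.result-title") %>% html_text(trim = TRUE)
ratings <- pg %>% html_nodes("div.rating-div") %>% html_text(trim = TRUE)
# combine, guarding against the two node sets differing in length
n <- min(length(names), length(ratings))
restaurants <- data.frame(name = names[seq_len(n)],
                          rating = as.numeric(ratings[seq_len(n)]),
                          stringsAsFactors = FALSE)
head(restaurants)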
Answer 1 (score: 2)
I would suggest getURL from the RCurl package to fetch the document content, which we can then parse with htmlParse. htmlParse sometimes has trouble with certain content when fetching pages itself; in those cases, retrieving the page with getURL first is recommended.
url <- "https://www.zomato.com/ncr/restaurants/north-indian"
library(RCurl)
library(XML)
content <- getURL(url)    # fetch the raw HTML (getURL handles https)
doc <- htmlParse(content) # parse the downloaded HTML into a document tree
summary(doc)
# $nameCounts
#
# div a li span input article h3 meta
# 1337 362 232 212 33 30 30 27
# img script ul link section p br form
# 26 21 20 17 7 6 3 3
# body footer h1 head header html noscript ol
# 1 1 1 1 1 1 1 1
# strong textarea title
# 1 1 1
#
# $numNodes
# [1] 2377
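To actually extract values from doc, xpathSApply from the XML package works on the parsed document. A minimal sketch, assuming the same class names the rvest answer relied on (result-title and rating-div), which may no longer match the live page:
# restaurant names: anchor tags whose class contains "result-title"
names <- xpathSApply(doc, "//a[contains(@class, 'result-title')]", xmlValue)
# ratings: divs whose class contains "rating-div", with whitespace stripped
ratings <- xpathSApply(doc, "//div[contains(@class, 'rating-div')]", xmlValue)
ratings <- as.numeric(gsub("[[:space:]]", "", ratings))
head(names)
head(ratings)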
Also note that readLines does not support https, so that error message is not surprising.
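For what it's worth, this limitation is version-dependent: on more recent versions of R (roughly 3.2.0 onward, where libcurl URL support was added), readLines() can usually open an https URL directly, so something like the following is expected to work there, assuming the site does not block plain requests:
# requires a recent R with libcurl support; older versions raise "unsupported URL scheme"
lines <- readLines("https://www.zomato.com/ncr/restaurants/north-indian", warn = FALSE)
head(lines)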