I am trying to scrape the following page:
https://metro.zakaz.ua/uk/?promotion=1
The page content is rendered by React.
I can scrape the first page with this code:
url="https://metro.zakaz.ua/uk/?promotion=1"
read_html(url)%>%
html_nodes("script")%>%
.[[8]] %>%
html_text()%>%
fromJSON()%>%
.$catalog%>%.$items%>%
data.frame
As a result I get all the items from the first page, but I don't know how to scrape the remaining pages.
If it helps, this JS snippet (run in the browser console) moves to the next page:
document.querySelectorAll('.catalog-pagination')[0].children[1].children[0].click()
Thanks for your help!
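For reference, the same click can also be issued from R once a Selenium session exists. This is only a minimal sketch, assuming an already running RSelenium client named remDr (the answers below show how to start one) and that its executeScript() method is available:

library(RSelenium)

# assumption: remDr is an open remoteDriver client (see the answers below)
remDr$navigate("https://metro.zakaz.ua/uk/?promotion=1")

# run the same next-page click as the console snippet above
remDr$executeScript(
  "document.querySelectorAll('.catalog-pagination')[0].children[1].children[0].click();",
  args = list()
)

Sys.sleep(5)                       # give the React app time to render the next page
src <- remDr$getPageSource()[[1]]  # re-read the updated page source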
Answer 0 (score: 2)
Try tweaking the code a bit:
from selenium import webdriver

driver = webdriver.Firefox()
current_page = 1
url = "https://metro.zakaz.ua/uk/?promotion=" + str(current_page)
driver.get(url)

# get all elements with class "page" (the pagination buttons)
pages = driver.find_elements_by_class_name("page")

for i in pages:
    # the page source is refreshed each time a page button has been clicked
    html = driver.page_source
    # put your scraping code here

    # click the next page button; this continues until there are no more pages
    if int(i.text) == current_page + 1:
        i.click()
        current_page += 1
Answer 1 (score: 2)
You will need RSelenium to perform the headless navigation.
See this for setting it up: How to set up rselenium for R?
library(RSelenium)
library(rvest)
library(tidyverse)  # tibble(), bind_rows(), str_c()
url="https://metro.zakaz.ua/uk/?promotion=1"
rD <- rsDriver(port=4444L, browser="chrome")
remDr <- rD[['client']]
remDr$navigate(url)
### adjust items you want to scrape
src <- remDr$getPageSource()[[1]]
pg <- read_html(src)
tbl <- tibble(
  product_name = pg %>% html_nodes(".product-card-name") %>% html_text(),
  product_info = pg %>% html_nodes(".product-card-info") %>% html_text()
)
## to handle pagination (tested with 5 pages) - adjust accordingly
for (i in 2:5) {
  pages <- remDr$findElement(using = 'css selector', str_c(".page:nth-child(", i, ")"))
  pages$clickElement()
  ## wait 5 sec for the page to load
  Sys.sleep(5)
  src <- remDr$getPageSource()[[1]]
  pg <- read_html(src)
  data <- tibble(
    product_name = pg %>% html_nodes(".product-card-name") %>% html_text(),
    product_info = pg %>% html_nodes(".product-card-info") %>% html_text()
  )
  tbl <- tbl %>% bind_rows(data)
}
nrow(tbl)
head(tbl)
tail(tbl)
The nrow(), head(), and tail() calls above give a quick look at the combined output.
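One note on the "headless" point at the top of this answer: the rsDriver() call above opens a visible Chrome window. If a truly headless run is wanted, Chrome options can be passed through extraCapabilities. This is only a sketch, assuming rsDriver() forwards extraCapabilities to the underlying remoteDriver and that your Chrome/chromedriver versions accept these flags:

library(RSelenium)

# request a headless Chrome session (assumption: these flags are supported by your Chrome build)
eCaps <- list(chromeOptions = list(
  args = c("--headless", "--disable-gpu", "--window-size=1280,1024")
))

rD    <- rsDriver(port = 4444L, browser = "chrome", extraCapabilities = eCaps)
remDr <- rD[["client"]]
remDr$navigate("https://metro.zakaz.ua/uk/?promotion=1")

# ...same scraping and pagination loop as above...

# clean up when done
remDr$close()
rD[["server"]]$stop()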