I am trying to web-scrape a site whose pages are rendered with JavaScript (https://openlibrary.ecampusontario.ca/catalogue/). I can get the content from the first page, but I am not sure how to make my script click the button for each subsequent page so I can grab that content as well. Here is my script.
import time
from bs4 import BeautifulSoup as soup
import requests
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import json
# The path to where you have your chrome webdriver stored:
webdriver_path = '/Users/rawlins/Downloads/chromedriver'
# Add arguments telling Selenium to not actually open a window
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--window-size=1920x1080')
# Fire up the headless browser
browser = webdriver.Chrome(executable_path=webdriver_path,
                           chrome_options=chrome_options)
# Load webpage
url = "https://openlibrary.ecampusontario.ca/catalogue/"
browser.get(url)
# to ensure that the page has loaded completely.
time.sleep(3)
data = []
# Parse HTML, close browser
page_soup = soup(browser.page_source, 'lxml')
containers = page_soup.findAll("div", {"class":"result-item tooltip"})
for container in containers:
    item = {}
    item['type'] = "Textbook"
    item['title'] = container.find('h4', {'class': 'textbook-title'}).text.strip()
    item['author'] = container.find('p', {'class': 'textbook-authors'}).text.strip()
    item['link'] = "https://openlibrary.ecampusontario.ca/catalogue/" + container.find('h4', {'class': 'textbook-title'}).a["href"]
    item['source'] = "eCampus Ontario"
    item['base_url'] = "https://openlibrary.ecampusontario.ca/catalogue/"
    data.append(item)  # add the item to the list
with open("js-webscrape-2.json", "w") as writeJSON:
    json.dump(data, writeJSON, ensure_ascii=False)
browser.quit()
Answer 0 (score: 1)
You do not actually have to click any buttons. For example, to search for items with the keyword "electricity", navigate to the URL
https://openlibrary-repo.ecampusontario.ca/rest/filtered-items?query_field%5B%5D=*&query_op%5B%5D=matches&query_val%5B%5D=(%3Fi)electricity&filters=is_not_withdrawn&offset=0&limit=10000
This returns a JSON string of the items, the first of which is:
{"items":[{"uuid":"6af61402-b0ec-40b1-ace2-1aa674c2de9f","name":"Introduction to Electricity, Magnetism, and Circuits","handle":"123456789/579","type":"item","expand":["metadata","parentCollection","parentCollectionList","parentCommunityList","bitstreams","all"],"lastModified":"2019-05-09 15:51:06.91","parentCollection":null,"parentCollectionList":null,"parentCommunityList":null,"bitstreams":null,"withdrawn":"false","archived":"true","link":"/rest/items/6af61402-b0ec-40b1-ace2-1aa674c2de9f","metadata":null}, ...
Now, to get that item, use its uuid and navigate to:
https://openlibrary.ecampusontario.ca/catalogue/item/?id=6af61402-b0ec-40b1-ace2-1aa674c2de9f
You can handle any interaction with this site the same way (this does not work for every site, but it does work for yours).
To find out which URL is requested when you click a button or enter text (which is what I did to get the URL above), you can use Fiddler.
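For illustration, here is a minimal sketch of that REST-based approach in Python, assuming the endpoint and JSON shape shown above (the query parameters mirror the example URL; swap "electricity" for your own keyword):

import requests

# Query parameters mirror the example URL above; requests URL-encodes
# the bracketed field names and the (?i) flag for us.
params = {
    "query_field[]": "*",
    "query_op[]": "matches",
    "query_val[]": "(?i)electricity",
    "filters": "is_not_withdrawn",
    "offset": 0,
    "limit": 10000,
}
response = requests.get(
    "https://openlibrary-repo.ecampusontario.ca/rest/filtered-items",
    params=params,
)
response.raise_for_status()

for item in response.json().get("items", []):
    # Each item carries a uuid that maps to a catalogue detail page.
    detail_url = "https://openlibrary.ecampusontario.ca/catalogue/item/?id=" + item["uuid"]
    print(item["name"], detail_url)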
Answer 1 (score: 0)
I wrote a small Selenium script that may help you.
What it does is: "while the last page of the catalogue is not the selected one (i.e. its class does not contain 'selected'), scrape, then click next".
while "selected" not in driver.find_elements_by_css_selector("[id='results-pagecounter-pages'] a")[-1].get_attribute("class"):
#your scrapping here
driver.find_element_by_css_selector("[id='next-btn']").click()
One problem you may run into with this approach is that it does not wait for the results to load after each click, but you can figure out how to handle that from here.
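One possible way to add that wait (my own assumption, not part of the answer above) is an explicit WebDriverWait that blocks until the old results go stale after clicking next. This assumes the result nodes (the "result-item tooltip" divs from the question) are replaced when the page changes, and it reuses the driver from the snippet above:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
while "selected" not in driver.find_elements_by_css_selector("[id='results-pagecounter-pages'] a")[-1].get_attribute("class"):
    # your scraping here
    first_result = driver.find_element_by_css_selector("div.result-item")
    driver.find_element_by_css_selector("[id='next-btn']").click()
    # Block until the old result nodes are detached from the DOM,
    # i.e. the next page of results has started rendering.
    wait.until(EC.staleness_of(first_result))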
Hope this helps.