Web scraping a JavaScript-rendered website using Selenium

Date: 2021-01-31 13:25:33

Tags: python-3.x selenium-webdriver web-scraping beautifulsoup

Please forgive me if this question seems very newbie, as I am new to this, but I have tried many things and could not find any solution. I am trying to scrape this website. Below is the code:

import requests
import pandas as pd
import re
from bs4 import BeautifulSoup
from selenium import webdriver
import time
urls = []
for i in range(1,5):
    pages = "https://speta.org/home/directory-of-members/?type=companies&category%5B%5D=corporate-member&pg={0}&sort=a-z".format(i)
    urls.append(pages)
Data = []
options = webdriver.ChromeOptions()
options.add_argument('headless') 
browser = webdriver.Chrome(executable_path =r"C:/XXXXXX/XXXXXXX/chromedriver.exe", options=options)
links=[]
for info in urls:
    browser.get(info)
    time.sleep(10)
    elements = browser.find_element_by_xpath("//div[@class='1f-item 1f-item-default']/a")
    link = [elem.get_attribute('href') for elem in elements]
    links.append(link)
print(links)

The error:

NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//div[@class='1f-item 1f-item-default']/a"}
  (Session info: headless chrome=88.0.4324.104)

I am sure I have made a mistake in identifying the correct tag here; any suggestions would be greatly appreciated!

Thanks!!

1 answer:

Answer 0: (score: 0)

It works for me. There are two problems in your code. First, find_element_by_xpath (singular) returns a single element, so you cannot iterate over it; use the plural find_elements_* methods, which return a list (an empty one when nothing matches, instead of raising NoSuchElementException). Second, the class name in your XPath is 1f-item (with a digit one), while the actual class on the page is lf-item (lowercase L), which is why no element was found. Here I used browser.find_elements_by_css_selector, which returns a list. You could also use this XPath:

browser.find_elements_by_xpath("//*[@id='c27-explore-listings']/section/div/div[2]/div[1]/div/div[1]/a")

That works as well. The full working code:

from selenium import webdriver
import time

# Build the four paginated directory URLs
urls = []
for i in range(1, 5):
    pages = "https://speta.org/home/directory-of-members/?type=companies&category%5B%5D=corporate-member&pg={0}&sort=a-z".format(i)
    urls.append(pages)

options = webdriver.ChromeOptions()
options.add_argument('headless')
browser = webdriver.Chrome(options=options)

links = []
for info in urls:
    browser.get(info)
    time.sleep(10)  # give the JavaScript-rendered listings time to load
    # Plural find_elements_* returns a list; note the class is lf-item, not 1f-item
    elements = browser.find_elements_by_css_selector("div.lf-item.lf-item-default a")
    # elements = browser.find_elements_by_xpath("//*[@id='c27-explore-listings']/section/div/div[2]/div[1]/div/div[1]/a")
    link = [elem.get_attribute('href') for elem in elements]
    links.append(link)
print(links)
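
If you are on a newer Selenium release (the 4.x series removed the find_elements_by_* helpers), the same scrape can be written with By locators and an explicit wait instead of the fixed time.sleep(10). This is a minimal sketch under those assumptions, reusing the same lf-item selector and assuming Selenium can locate a chromedriver on its own:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Same four paginated directory URLs as above
urls = [
    "https://speta.org/home/directory-of-members/?type=companies&category%5B%5D=corporate-member&pg={0}&sort=a-z".format(i)
    for i in range(1, 5)
]

options = webdriver.ChromeOptions()
options.add_argument("--headless")
browser = webdriver.Chrome(options=options)

links = []
for url in urls:
    browser.get(url)
    # Wait up to 15 s for the JS-rendered listing anchors to appear,
    # instead of always sleeping a fixed 10 s per page
    elements = WebDriverWait(browser, 15).until(
        EC.presence_of_all_elements_located(
            (By.CSS_SELECTOR, "div.lf-item.lf-item-default a")
        )
    )
    links.append([elem.get_attribute("href") for elem in elements])

browser.quit()
print(links)

A side benefit of the explicit wait: if the selector is wrong (as with the original 1f-item typo), WebDriverWait raises a TimeoutException after 15 seconds, which surfaces the mistake immediately instead of silently collecting empty lists.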