Accessing all href-links in a deeply nested structure

Asked: 2018-04-10 10:33:32

Tags: python selenium dom web-scraping beautifulsoup

I'm trying to access all href-links on a website, specifically the ones in the search results. My first goal is to collect all the links and then dig into each of them further. The problem is that I get some links from the site, but not the links of the search results. Here is one version of my code:

from selenium import webdriver  # imported but unused in this version
from htmldom import htmldom

# htmldom only parses the static HTML; it does not execute JavaScript
dom = htmldom.HtmlDom("myWebsite")  # "myWebsite" is a placeholder URL
dom = dom.createDom()

p_links = dom.find("a")
for link in p_links:
    print("URL: " + link.attr("href"))

Here is a screenshot of the HTML of that particular website. In the screenshot I marked the href-link I'm trying to access. I'm open to any kind of help, whether with Selenium, htmldom, BeautifulSoup, etc.

[screenshot of the page's HTML source, with the target href-link marked]

2 Answers:

Answer 0 (score: 3)

The data you want is loaded via an AJAX request, so you can't scrape it directly from the page source. However, the AJAX request is sent to this URL:

https://open.nrw/solr/collection1/select?q=*%3A*&fl=validated_data_dict%20title%20groups%20notes%20maintainer%20metadata_modified%20res_format%20author_email%20name%20extras_opennrw_spatial%20author%20extras_opennrw_groups%20extras_opennrw_format%20license_id&wt=json&fq=-type:harvest+&sort=title_string%20asc&indent=true&rows=20

which returns the data in JSON format. You can use the requests module to fetch that data.

import requests

BASE_URL = 'https://open.nrw/dataset/'

# Request the same Solr endpoint that the page queries via AJAX
r = requests.get('https://open.nrw/solr/collection1/select?q=*%3A*&fl=validated_data_dict%20title%20groups%20notes%20maintainer%20metadata_modified%20res_format%20author_email%20name%20extras_opennrw_spatial%20author%20extras_opennrw_groups%20extras_opennrw_format%20license_id&wt=json&fq=-type:harvest+&sort=title_string%20asc&indent=true&rows=20')
data = r.json()

# Each doc's 'name' field is the slug of its dataset page
for item in data['response']['docs']:
    print(BASE_URL + item['name'])

Output:

https://open.nrw/dataset/mags-90-10-dezilsverhaeltnis-der-aequivalenzeinkommen-1512029759099
https://open.nrw/dataset/alkis-nutzungsarten-pro-baublock-wuppertal-w
https://open.nrw/dataset/allgemein-bildende-schulen-am-1510-nach-schulformen-schulen-schueler-und-lehrerbestand-w
https://open.nrw/dataset/altersgruppen-in-meerbusch-gesamt-meerb
https://open.nrw/dataset/amtliche-stadtkarte-wuppertal-raster-w
https://open.nrw/dataset/mais-anteil-abhaengig-erwerbstaetiger-mit-geringfuegiger-beschaeftigung-1477312040433
https://open.nrw/dataset/mags-anteil-der-stillen-reserve-nach-geschlecht-und-altersgruppen-1512033735012
https://open.nrw/dataset/mags-anteil-der-vermoegenslosen-in-nrw-nach-beruflicher-stellung-1512032087083
https://open.nrw/dataset/anzahl-kinderspielplatze-meerb
https://open.nrw/dataset/anzahl-der-sitzungen-von-rat-und-ausschussen-meerb
https://open.nrw/dataset/anzahl-medizinischer-anwendungen-den-oeffentlichen-baedern-duesseldorfs-seit-2006-d
https://open.nrw/dataset/arbeitslose-den-wohnquartieren-duesseldorf-d
https://open.nrw/dataset/arbeitsmarktstatistik-arbeitslose-gelsenkirchen-ge
https://open.nrw/dataset/arbeitsmarktstatistik-arbeitslose-nach-rechtskreisen-des-sgb-ge
https://open.nrw/dataset/arbeitsmarktstatistik-arbeitslose-nach-stadtteilen-gelsenkirchen-ge
https://open.nrw/dataset/arbeitsmarktstatistik-sgb-ii-rechtskreis-auf-stadtteilebene-gelsenkirchen-ge
https://open.nrw/dataset/arbeitsmarktstatistik-sozialversicherungspflichtige-auf-stadtteilebene-gelsenkirchen-ge
https://open.nrw/dataset/verkehrszentrale-arbeitsstellen-in-nordrhein-westfalen-1476688294843
https://open.nrw/dataset/mags-arbeitsvolumen-nach-wirtschaftssektoren-1512025235377
https://open.nrw/dataset/mais-armutsrisikoquoten-nach-geschlecht-und-migrationsstatus-der-personen-1477313317038

As you can see, this returns the first 20 URLs. When the page is first loaded, only 20 items are present; more are loaded as you scroll down. To get more items, you can change the query string parameter in the URL: it ends with rows=20, and you can change that number to get the desired number of results.
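As a minimal sketch of that idea, the same request can be written with the query split into a params dict, so that rows and the standard Solr start offset are easy to adjust for paging. Note the field list here is trimmed to the one field the loop actually uses, which assumes the endpoint tolerates a reduced fl:

import requests

BASE_URL = 'https://open.nrw/dataset/'
SOLR_URL = 'https://open.nrw/solr/collection1/select'

params = {
    'q': '*:*',
    'fl': 'name title',         # trimmed field list (assumption: the endpoint allows it)
    'wt': 'json',
    'fq': '-type:harvest',
    'sort': 'title_string asc',
    'rows': 100,                # ask for 100 results instead of 20
    'start': 0,                 # standard Solr offset, useful for paging
}

r = requests.get(SOLR_URL, params=params)
for item in r.json()['response']['docs']:
    print(BASE_URL + item['name'])

Combining start and rows this way pages through the full result set without relying on the site's infinite scroll.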

Answer 1 (score: 2)

The results appear after the initial page load because of an AJAX request.

I managed to get the links with Selenium, but I had to wait for the .ckantitle a elements to load (these are the links you want to get).

I should mention that the webdriver will wait for a page to load by default. It does not wait for loading inside frames or for AJAX requests. That means when you use .get('url'), your browser will wait until the page is completely loaded and then go on to the next command in the code. But when you post an AJAX request, webdriver does not wait; it is your responsibility to wait the appropriate amount of time for the page, or a part of the page, to load. That is why there is a module named expected_conditions.

Code:

from urllib.parse import urljoin

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException

url = 'https://open.nrw/suche'
html = None

browser = webdriver.Chrome()
browser.get(url)
delay = 3  # seconds

try:
    # wait until at least one result link is present in the DOM
    WebDriverWait(browser, delay).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, '.ckantitle a'))
    )
    html = browser.page_source
except TimeoutException:
    print('Loading took too much time!')
finally:
    browser.quit()

if html:
    soup = BeautifulSoup(html, 'lxml')
    links = soup.select('.ckantitle a')
    for link in links:
        # hrefs are relative, so resolve them against the page URL
        print(urljoin(url, link['href']))

You need to have selenium installed:

pip install selenium

and get a driver here.
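If the driver executable is not on your PATH, you can also point Selenium at it explicitly; a one-line sketch for the Selenium 3 API of that era, with a placeholder path:

browser = webdriver.Chrome(executable_path='/path/to/chromedriver')  # placeholder path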