Unable to fetch certain fields from a webpage using requests

Date: 2020-05-13 21:49:25

Tags: python python-3.x web-scraping python-requests

I'm trying to use the requests module to grab the title and link of each of the different containers from this webpage, but I can't find any way to do it. I tried to locate a hidden API of the kind that usually shows up in the dev tools, but failed. I've noticed on other occasions that content generated dynamically by scripts is often still available inside some script tag; in this case, however, I couldn't find the content there either. As a last resort, I made use of Selenium to grab them:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

link = 'https://www.firmy.cz/kraj-praha?q=prodej+kol'

def get_content(url):
    driver.get(url)
    # Wait until the result containers are visible, then pull each link and title.
    for item in wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,'.companyDetail'))):
        item_link = item.find_element_by_css_selector("h3 > a.companyTitle").get_attribute("href")
        item_title = item.find_element_by_css_selector("span.title").text
        yield item_link,item_title

if __name__ == '__main__':
    with webdriver.Chrome() as driver:
        wait = WebDriverWait(driver,10)
        for item in get_content(link):
            print(item)

The script produces results like the following:

('https://www.firmy.cz/detail/12824790-bike-gallery-s-r-o-praha-vokovice.html', 'Bike Gallery s.r.o.')
('https://www.firmy.cz/detail/13162651-bikeprodejna-cz-praha-dolni-chabry.html', 'BIKEPRODEJNA.CZ')
('https://www.firmy.cz/detail/406369-bikestore-cz-praha-podoli.html', 'Bikestore.cz')
('https://www.firmy.cz/detail/12764331-shopbike-cz-praha-ujezd-nad-lesy.html', 'Shopbike.cz')
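
For completeness, this is roughly the kind of script-tag check I ran before falling back to Selenium (a minimal sketch; it just probes every script tag for a payload that parses as plain JSON):

import json
import requests
from bs4 import BeautifulSoup

r = requests.get('https://www.firmy.cz/kraj-praha?q=prodej+kol')
soup = BeautifulSoup(r.text, "lxml")

# Probe every <script> tag for embedded data that parses as plain JSON.
for script in soup.find_all("script"):
    text = script.string or ""
    if not text.strip():
        continue
    try:
        data = json.loads(text)
    except ValueError:
        continue  # not plain JSON, move on
    print(type(data), str(data)[:100])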

How can I get the same results using the requests module?

1 Answer:

Answer 0 (score: 6)

After analyzing the raw page source, the solution turns out to be pretty simple: you just have to append an _escaped_fragment_= URL parameter to the link. For example, a minimal Python script that fetches the desired content looks like this:

import requests
r = requests.get('https://www.firmy.cz/kraj-praha?q=prodej+kol&_escaped_fragment_=')
print(r.content)

The following Python script mimics your current implementation by using requests and parsing the received response with BeautifulSoup:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

base = 'https://www.firmy.cz'
link = 'https://www.firmy.cz/kraj-praha?q=prodej+kol&_escaped_fragment_='

def get_info(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.text,"lxml")
    # Each result sits in a .companyDetail block; build an absolute link and grab the title.
    for item in soup.select(".companyDetail"):
        item_link = urljoin(base,item.select_one("h3 > a.companyTitle")['href'])
        item_title = item.select_one("span.title").get_text(strip=True)
        yield item_link,item_title

if __name__ == '__main__':
    for item in get_info(link):
        print(item)
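
As a side note, if the plain GET should ever come back blocked or with different markup, a common tweak is to send browser-like request headers with the same call; the value below is purely illustrative and not something this site is confirmed to need:

import requests

# Purely illustrative browser-like header; adjust or drop as needed.
headers = {'User-Agent': 'Mozilla/5.0'}
r = requests.get('https://www.firmy.cz/kraj-praha?q=prodej+kol&_escaped_fragment_=', headers=headers)
print(r.status_code)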

Before running it, make sure the required libraries are installed by running the following commands from the command line:

pip install beautifulsoup4
pip install html5lib
pip install lxml
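
A note on the dependencies: html5lib isn't actually used by the script above, since BeautifulSoup is asked for the lxml parser explicitly. If you'd rather not rely on lxml either, a minimal fallback sketch (the make_soup helper is just illustrative, not part of the script above) could look like this:

from bs4 import BeautifulSoup, FeatureNotFound

def make_soup(html):
    # Prefer lxml when it is installed; otherwise use Python's built-in parser.
    try:
        return BeautifulSoup(html, "lxml")
    except FeatureNotFound:
        return BeautifulSoup(html, "html.parser")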