urllib does not return the requested page content

Date: 2018-03-30 11:40:48

Tags: python web-scraping beautifulsoup python-requests urllib2

I have two pages I want to scrape: url_1 and url_2.

The only difference between them is that url_1 is the first page and url_2 is the third page of the same site.
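Concretely, the two URLs (quoted in full in the answer below) differ only in the `pagina` query parameter. A quick sketch with the standard library's `urllib.parse` makes the difference visible:

```python
from urllib.parse import urlsplit, parse_qs

url_1 = 'https://www.zoekscholen.onderwijsinspectie.nl/zoek-en-vergelijk?searchtype=generic&zoekterm=&pagina=&filterSectoren=BVE'
url_2 = 'https://www.zoekscholen.onderwijsinspectie.nl/zoek-en-vergelijk?searchtype=generic&zoekterm=&pagina=3&filterSectoren=BVE'

def page_param(url):
    # parse_qs drops keys with empty values by default;
    # keep_blank_values=True keeps "pagina=" visible
    qs = parse_qs(urlsplit(url).query, keep_blank_values=True)
    return qs.get('pagina', [''])[0]

print(page_param(url_1))  # ''
print(page_param(url_2))  # '3'
```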

I am using urllib to read the URLs:

from urllib.request import urlopen
html_1 = urlopen(url_1).read()
html_2 = urlopen(url_2).read()
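Worth noting: a plain `urlopen` call sends urllib's default `Python-urllib/3.x` User-Agent, which some servers treat differently from a browser. A minimal sketch of attaching browser-like headers with `urllib.request.Request` (the header value is the one used later in this question; whether this site accepts it is an assumption):

```python
from urllib.request import Request, urlopen

url_1 = 'https://www.zoekscholen.onderwijsinspectie.nl/zoek-en-vergelijk?searchtype=generic&zoekterm=&pagina=&filterSectoren=BVE'
headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5)AppleWebKit 537.36 (KHTML, like Gecko) Chrome"}

req = Request(url_1, headers=headers)
# html_1 = urlopen(req).read()  # same network call as before, now with headers

# urllib normalizes stored header names via str.capitalize()
print(req.get_header('User-agent'))
```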

Unfortunately, the content of html_2 is identical to html_1. While reading up on this, I found it may happen because the server treats me as a bot. For that reason, I switched to the requests module together with Beautiful Soup to parse the pages:

import requests
from bs4 import BeautifulSoup
session = requests.Session()
headers = {"User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5)AppleWebKit 537.36 (KHTML, like Gecko) Chrome", "Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"}

req_1 = session.get(url_1, headers=headers)
bsObj_1 = BeautifulSoup(req_1.text, "html.parser")
req_2 = session.get(url_2, headers=headers)
bsObj_2 = BeautifulSoup(req_2.text, "html.parser")

The content is still the same. How can I fix this?

1 Answer:

Answer 0 (score: 1)

Try this:

import requests
from bs4 import BeautifulSoup
import time

url_1 = 'https://www.zoekscholen.onderwijsinspectie.nl/zoek-en-vergelijk?searchtype=generic&zoekterm=&pagina=&filterSectoren=BVE'
url_2 = 'https://www.zoekscholen.onderwijsinspectie.nl/zoek-en-vergelijk?searchtype=generic&zoekterm=&pagina=3&filterSectoren=BVE'

headers = {"User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5)AppleWebKit 537.36 (KHTML, like Gecko) Chrome",
            "Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"}

with requests.Session() as s:
    s.headers.update(headers)
    s.get('https://www.zoekscholen.onderwijsinspectie.nl/')
    req_1 = s.get(url_1)
    soup1 = BeautifulSoup(req_1.text, "lxml")
    print(soup1.find("div", {"id": "mainResults"}).find_all("h2")[0].text)
    time.sleep(1)
    req_2 = s.get(url_2)
    soup2 = BeautifulSoup(req_2.text, "lxml")
    print(soup2.find("div", {"id": "mainResults"}).find_all("h2")[0].text)

Output:

Resultaten 1 - 20 van 165

Resultaten 41 - 60 van 165
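What seems to make the answer's version work (this reading is an inference, not stated by the answerer) is the initial `s.get` to the site root, which lets the server set session cookies before the paginated URL is requested, plus the `time.sleep(1)` throttle between requests. The printed "Resultaten …" lines can also be parsed to check programmatically that the two pages really differ; a small hypothetical helper:

```python
import re

def parse_result_range(text):
    # "Resultaten 41 - 60 van 165" -> (41, 60, 165)
    m = re.search(r'Resultaten (\d+) - (\d+) van (\d+)', text)
    return tuple(int(g) for g in m.groups()) if m else None

print(parse_result_range('Resultaten 1 - 20 van 165'))   # (1, 20, 165)
print(parse_result_range('Resultaten 41 - 60 van 165'))  # (41, 60, 165)
```

With 165 results at 20 per page, distinct ranges confirm that page 1 and page 3 were actually fetched.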