Web scraping - going to page 2

Time: 2019-06-29 17:32:34

Tags: python web-scraping beautifulsoup

How do I get to the second set of data? No matter what I do, it only returns page 1.

import bs4
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

myURL = 'https://jobs.collinsaerospace.com/search-jobs/'

uClient = uReq(myURL)
page_html = uClient.read()
uClient.close()

page_soup = soup(page_html, "html.parser")
container = page_soup.findAll("section", {"id":"search-results"}, {"data-current-page":"4"})


for child in container:
    for heading in child.find_all('h2'):
        print(heading.text)

3 Answers:

Answer 0 (score: 2):

The site actually returns the HTML containing all the entries inside a JSON response. The API behind this lets you specify the page number as well as the number of records to return per page; increasing that number speeds things up further.

The returned JSON contains three keys: the filter information, the results HTML, and a flag indicating whether any jobs were returned. That last entry can be used to tell when you have run past the last page.

You may want to look at the very popular Python requests library, which simplifies generating the correct URL for you and is fast, too.
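For instance, a quick way to confirm that structure is to request a single page and print the top-level keys. This is a minimal sketch, assuming the same endpoint and parameters as the full script below; only 'results' and 'hasJobs' are key names confirmed by that script.

import requests

myURL = 'https://jobs.collinsaerospace.com/search-jobs/results'
params = {
    "CurrentPage": 1,
    "RecordsPerPage": 15,
    "SearchResultsModuleName": "Search Results",
    "SearchFiltersModuleName": "Search Filters",
    "SearchType": 5,
}

data = requests.get(myURL, params=params).json()
print(list(data.keys()))   # the filter info, 'results' (the HTML), and 'hasJobs'
print(data['hasJobs'])     # False once you page past the last results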

import requests
from bs4 import BeautifulSoup as soup

params = {
    "CurrentPage": 1,
    "RecordsPerPage": 100,   # more records per page means fewer requests
    "SearchResultsModuleName": "Search Results",
    "SearchFiltersModuleName": "Search Filters",
    "SearchType": 5,
}

myURL = 'https://jobs.collinsaerospace.com/search-jobs/results'
page = 1
more_jobs = True

while more_jobs:
    print(f"\nPage {page}")
    params['CurrentPage'] = page
    req = requests.get(myURL, params=params)
    data = req.json()   # JSON payload: filter info, results HTML, hasJobs flag
    page_soup = soup(data['results'], "html.parser")
    # find_all takes attribute filters as a single dict; the extra
    # {"data-current-page": "4"} in the question was being passed as the
    # `recursive` argument and never filtered anything.
    container = page_soup.find_all("section", {"id": "search-results"})

    for child in container:
        for heading in child.find_all('h2'):
            print(heading.text)

    more_jobs = data['hasJobs']  # Did this page return any jobs?
    page += 1

Answer 1 (score: 1):

Try:

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup


# The site also exposes pages as path segments: /search-jobs/1, /search-jobs/2, ...
for page in range(10):

    myURL = 'https://jobs.collinsaerospace.com/search-jobs/' + str(page)

    uClient = uReq(myURL)
    page_html = uClient.read()
    uClient.close()

    page_soup = soup(page_html, "html.parser")
    container = page_soup.find_all("section", {"id": "search-results"})

    for child in container:
        for heading in child.find_all('h2'):
            print(heading.text)

Output (first 3 pages):

0
SYSTEMS / APPLICATIONS ENGINEER
Data Scientist
Sr Engineer, Drafter/Product Definition
Finance and Accounting Intern
Senior Software Engineer - CT3
Intern Manufacturing Engineer
Staff Eng., Reliability Engineering
Software Developer
Configuration Management Specialist
Disassembler I--2nd Shift
Disassembler I--3rd Shift
Manager, Supplier Performance
Manager, Supplier Performance
Assoc Eng, Mfg Engrg-Ops, ME P1
Manager, Supplier Performance
1
Assembly Operator (UK7014) 1 1 1 1
Senior Administrator (DF1040) 1 1 1
Tester 1
Assembler 1
Assembler 1
Finisher 1
Painter 1
Technician 1 Manufacturing/Operations
Assembler 1 - 1st Shift
Supply Chain Analyst 1
Assembler (W7006) 1
Assembler (W7006) 1
Supplier Quality Engineer 1
Supplier Inspection Engineer 1
Assembler 1 - 1st Shift
2
Assembler I-FAA-2
Senior/Business Analyst-2
Operational Technical Support Level 2
Project Engineer - 2 – EMU Program
Line & Surface Plate Inspector Class 2
Software Engineer (LVL 2) - Embedded UAV Controls
Software Engineer (LVL 2 / JAVA) - Air Combat Training
Software  Engineer (Level 2) - Mission Simulation & Training
Electrical Engineer (LVL 2) - Mission Systems Design Tools
Quality Inspector II
GET/PGET
GET/PGET
Production Supervisor - 2nd shift
Software Developer
Trainee Operator/ Operator

Answer 2 (score: 1):

Try the script below to get the results from whatever pages you are interested in. All you have to do is change the range to suit. I could have written a while loop to exhaust the whole listing, but that is not what you asked (a sketch of that variant follows the script below).

import requests
from bs4 import BeautifulSoup

link = 'https://jobs.collinsaerospace.com/search-jobs/results?'

params = {
    'CurrentPage': '',
    'RecordsPerPage': 15,
    'Distance': 50,
    'SearchResultsModuleName': 'Search Results',
    'SearchFiltersModuleName': 'Search Filters',
    'SearchType': 5
}

for page in range(1, 5):  # This is where you change the range to fetch whichever pages you want
    params['CurrentPage'] = page
    res = requests.get(link, params=params)
    soup = BeautifulSoup(res.json()['results'], "lxml")
    for name in soup.select("h2"):
        print(name.text)
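For completeness, here is a minimal sketch of the while-loop variant mentioned above. It reuses the same endpoint and parameters and stops on the 'hasJobs' flag shown in the first answer; treat it as a sketch under those assumptions rather than a tested script.

import requests
from bs4 import BeautifulSoup

link = 'https://jobs.collinsaerospace.com/search-jobs/results?'
params = {
    'CurrentPage': 1,
    'RecordsPerPage': 15,
    'Distance': 50,
    'SearchResultsModuleName': 'Search Results',
    'SearchFiltersModuleName': 'Search Filters',
    'SearchType': 5
}

while True:
    res = requests.get(link, params=params)
    data = res.json()
    if not data['hasJobs']:   # no jobs returned: we are past the last page
        break
    soup = BeautifulSoup(data['results'], "lxml")
    for name in soup.select("h2"):
        print(name.text)
    params['CurrentPage'] += 1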