How can I get every "page link" from "each page"?

Time: 2019-11-18 15:14:17

Tags: web-scraping beautifulsoup python-requests web-crawler href

I want to get every "page link" from "each page" with Python 3.

In my code, each page's location is built from BaseUrl, and each page link is selected by the body selector,

where

BaseUrl = 'https://www.jobplanet.co.kr/companies?sort_by=review_compensation_cache&industry_id=700&page='

select body = '#listCompanies > div > div.section_group > section:nth-child(1) > div > div > dl.content_col2_3.cominfo > dt > a'

Please check my code. I want to collect every link on every page so that the list of links ends up in linkUrl. What is wrong with it?

from bs4 import BeautifulSoup
import csv
import os
import re
import requests
import json

# jobplanet
BaseUrl = 'https://www.jobplanet.co.kr/companies?sort_by=review_compensation_cache&industry_id=700&page='


for i in range(1, 5, 1):
        url = BaseUrl + str(i)
        r = requests.get(url)
        soup = BeautifulSoup(r.text,'lxml')
        body = soup.select('#listCompanies > div > div.section_group > section:nth-child(1) > div > div > dl.content_col2_3.cominfo > dt > a')
        #print(body)

        linkUrl = []
        for item in body:
            link = item.get('href')
            linkUrl.append(link)
print(linkUrl)

2 Answers:

Answer 0 (score: 1)

The CSS selector you chose returns only one record per page, because section:nth-child(1) matches only the first section block. I have provided a simpler CSS selector that returns all 10 records on each page.

You also need to define the list outside the loop; otherwise it is reset on every iteration and only the last page's links survive.

from bs4 import BeautifulSoup
import requests

linkUrl = []  # define the list once, outside the page loop
BaseUrl = 'https://www.jobplanet.co.kr/companies?sort_by=review_compensation_cache&industry_id=700&page={}'
for i in range(1, 6):
    url = BaseUrl.format(i)
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'lxml')
    links = soup.select(".us_titb_l3 > a")  # company-name links, 10 per page
    for item in links:
        link = item.get('href')
        linkUrl.append(link)

print(linkUrl)
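
Note that the hrefs collected this way are most likely site-relative paths rather than full URLs. If you need absolute URLs, here is a minimal sketch using urllib.parse.urljoin (the site root below is taken from BaseUrl; adjust it if the hrefs turn out to be absolute already):

from urllib.parse import urljoin

# Assumes linkUrl already holds the relative hrefs scraped above.
base = 'https://www.jobplanet.co.kr'
absolute_urls = [urljoin(base, href) for href in linkUrl]
print(absolute_urls)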

Answer 1 (score: 1)

Your CSS selector was wrong. I have also added pagination so that all result pages are crawled.

from bs4 import BeautifulSoup
import requests
from urllib import parse

# jobplanet
BaseUrl = 'https://www.jobplanet.co.kr/companies?sort_by=review_compensation_cache&industry_id=700&page={}'
source = requests.get(BaseUrl.format(1))
soup = BeautifulSoup(source.text, 'lxml')
last_page_index = soup.select('a[class="btn_pglast"]')  # link to the last page of results
last_page_index = int(last_page_index[0].get('href').split('page=')[1]) if last_page_index else 1

linkUrl = []  # defined outside the loop so links from every page accumulate
for i in range(1, last_page_index + 1):
    print('## Getting Page {} out of {}'.format(i, last_page_index))
    if i > 1:  # page 1 was already fetched above, so avoid requesting it again
        url = BaseUrl.format(i)
        r = requests.get(url)
        soup = BeautifulSoup(r.text, 'lxml')
    body = soup.select('dt[class="us_titb_l3"] a')
    for item in body:
        link = item.get('href')
        link = parse.urljoin(BaseUrl, link)  # make the relative href absolute
        linkUrl.append(link)

print(linkUrl)
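
If you want to persist the collected links, here is a minimal sketch that writes linkUrl to a CSV file (the question already imports csv; the filename links.csv is just an example):

import csv

# Hypothetical output file; one link per row under a "link" header.
with open('links.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['link'])
    for link in linkUrl:
        writer.writerow([link])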