Selenium - BS4: Problems faced while scraping a webpage

Asked: 2019-11-29 08:46:49

Tags: python selenium-webdriver beautifulsoup

I want to scrape the company info of all companies, along with their job details, from the given URL below. URL: http://desiopt.com/search-results-jobs/

from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd
import re

driver = webdriver.Chrome(executable_path=r"C:/Users/Chandra Sekhar/Desktop/chrome-driver/chromedriver.exe")
titles = []
driver.get("http://desiopt.com/search-results-jobs/")
content = driver.page_source
soup = BeautifulSoup(content, 'html.parser')  # pass an explicit parser
for a in soup.findAll('div', attrs={'class': 'listing-links'}):
    info = a.find('div', attrs={'class': 'userInfo'})
    print(info.text)
    titles.append(info.text)

# build and save the DataFrame once, after the loop
df = pd.DataFrame({'Company info': titles})
# the only column is 'Company info'; strip non-word characters from it
df['Company info'] = df['Company info'].map(lambda x: re.sub(r'\W+', '', x))
df.to_csv('products1.csv', index=False)

1 Answer:

Answer 0 (score: 0):

Use the following URL:

https://desiopt.com/search-results-jobs/?action=search&page=&listings_per_page=&view=list

These are the two parameters you will edit: page= and listings_per_page=.

Currently, the site does have 37091 job listings.

After testing, I found that listings_per_page is capped at 1000 results per page.

Example: https://desiopt.com/search-results-jobs/?action=search&page=1&listings_per_page=1000&view=list
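As a quick illustration, here is a minimal sketch of building that URL with the requests library's params argument instead of hand-editing the query string (the parameter names come from the URL above; the 1000-per-page cap is the answer's observation, not documented behavior):

import requests

base = "https://desiopt.com/search-results-jobs/"
params = {
    'action': 'search',
    'page': 1,                  # which results page to fetch
    'listings_per_page': 1000,  # observed cap of 1000 per page
    'view': 'list',
}
r = requests.get(base, params=params)  # requests URL-encodes the query string
print(r.url)          # the full request URL, e.g. ...?action=search&page=1&...
print(r.status_code)  # 200 when the page loads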

So you will need to loop from page=1 to page=38 and set listings_per_page=1000.

That means 1000 results per page * 38 pages = 38000, which is enough to cover all 37091 jobs.
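A minimal sketch of those loop bounds (fetching only; the parsing step is omitted here):

import requests

# 38 pages * 1000 listings per page = 38000 slots, enough for 37091 jobs;
# the last page is only partially filled.
for page in range(1, 39):  # page=1 .. page=38 inclusive
    url = (f"https://desiopt.com/search-results-jobs/"
           f"?action=search&page={page}&listings_per_page=1000&view=list")
    r = requests.get(url)
    # parse r.text with BeautifulSoup here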

After that:

You will collect all the links and pass them to a list, with a condition to remove duplicates, in case you care about the sort order. Otherwise, just pass them to a set, which does not accept duplicates but does not care about order. Then you can parse each url in the list or set to collect the info.
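A minimal sketch of that choice, using hypothetical sample URLs:

scraped_urls = ['https://a.example', 'https://b.example', 'https://a.example']

# A list plus a membership check removes duplicates while keeping order:
ordered_unique = []
for url in scraped_urls:
    if url not in ordered_unique:
        ordered_unique.append(url)

# A set drops duplicates automatically but does not preserve order:
unordered_unique = set(scraped_urls)

print(ordered_unique)    # ['https://a.example', 'https://b.example']
print(unordered_unique)  # the same two URLs, in arbitrary order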

By the way, I will loop over 371 pages with 100 items each, so I will end up with 37100 urls (or fewer, if the last page holds fewer than 100), then remove the duplicates from them and parse:

import requests
from bs4 import BeautifulSoup
import csv

links = []
try:
    for item in range(1, 372):
        print(f"Extraction Page# {item}")
        r = requests.get(
            f"https://desiopt.com/search-results-jobs/?action=search&page={item}&listings_per_page=100&view=list")
        if r.status_code == 200:
            soup = BeautifulSoup(r.text, 'html.parser')
            # collect each profile link once, preserving insertion order;
            # the loop variable is renamed so it doesn't shadow the page counter
            for span in soup.findAll('span', attrs={'class': 'captions-field'}):
                for a in span.findAll('a'):
                    a = a.get('href')
                    if a not in links:
                        links.append(a)
except KeyboardInterrupt:
    print("Good Bye!")
    exit()

data = []
try:
    for link in links:
        r = requests.get(link)
        if r.status_code == 200:
            soup = BeautifulSoup(r.text, 'html.parser')
            for item in soup.findAll('div', attrs={'class': 'compProfileInfo'}):
                a = [a.text.strip() for a in item.findAll('span')]
                if a[6] == '':  # the website span can be empty
                    a[6] = 'N/A'
                data.append(a[0:7:2])  # spans 0, 2, 4, 6 -> Name, Phone, Email, Website
except KeyboardInterrupt:
    print("Good Bye!")
    exit()

# retry until the CSV can be written (it may be locked if open in Excel)
while True:
    try:
        with open('output.csv', 'w+', newline='') as file:
            writer = csv.writer(file)
            writer.writerow(['Name', 'Phone', 'Email', 'Website'])
            writer.writerows(data)
            print("Operation Completed")
    except PermissionError:
        print("Please Close The File")
        continue
    except KeyboardInterrupt:
        print("Good Bye")
        exit()
    break

The result can be checked here: Click Here

The output is 1885 rows, because I let the script remove the duplicate company links before parsing.

Run the code online: Click Here