How can I scrape pages faster with Selenium and BeautifulSoup?

Date: 2019-10-14 07:50:08

Tags: python selenium beautifulsoup webdriver

Thanks to the help of some wonderful people here, I was able to put together code that scrapes a web page. Because the page is dynamic, I had to use Selenium; BeautifulSoup on its own only works for static pages.

One drawback is that the whole process of opening the page, waiting for the popup and entering the input takes a lot of time. Time is an issue here because I have to scrape about 1000 pages (one per zip code), which takes about 10 hours (roughly 36 seconds per page, most of it spent on browser startup and fixed waits).

How can I optimize the code so that this doesn't take so long?

I'll leave the full code and the list of zip codes below so this can be reproduced.

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
import time
import pandas as pd

time_of_day=[]
price=[]
Hours=[]
day=[]
disabled=[]
location=[]

danishzip = pd.read_excel (r'D:\Danish_ZIPs.xlsx')

for i in range(len(danishzip)):
    try:
        zipcode = danishzip['Zip'][i]

        # A fresh Chrome instance is launched for every zip code; this is
        # the single biggest time cost in the loop.
        driver = webdriver.Chrome(executable_path = r'C:\Users\user\lib\chromedriver_77.0.3865.40.exe')
        wait = WebDriverWait(driver,10)
        driver.maximize_window()
        driver.get("https://www.nemlig.com/")

        wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".timeslot-prompt.initial-animation-done")))
        wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input[type='tel'][class^='pro']"))).send_keys(str(zipcode))
        wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".btn.prompt__button"))).click()

        # Fixed 3-second wait for the timeslot grid to finish rendering,
        # then hand the rendered HTML to BeautifulSoup for parsing.
        time.sleep(3)
        soup = BeautifulSoup(driver.page_source, 'html.parser')


        # The morning, afternoon and evening rows share the same markup,
        # so parse all three in one loop instead of three copied blocks.
        for row_name in ('beforDinnerRowTmSlt', 'afternoonRowTmSlt', 'eveningRowTmSlt'):
            row = soup.select_one('[data-automation="{}"]'.format(row_name))
            for slot, d in zip(row.select('.time-block__time'), row.select('.time-block__item')):
                location.append(soup.find('span', class_='zipAndCity').text)
                time_of_day.append(row.select_one('.time-block__row-header').text)
                Hours.append(slot.text)
                price.append(slot.find_next(class_="time-block__cost").text)
                day.append(soup.select_one('.date-block.selected [data-automation="dayNmTmSlt"]').text + " " + soup.select_one('.date-block.selected [data-automation="dayDateTmSlt"]').text)
                # A 'disabled' class on the slot means it cannot be booked.
                disabled.append('1' if 'disabled' in d['class'] else '0')

        df = pd.DataFrame({"time_of_day":time_of_day,"Hours":Hours,"price":price,"Day":day,"Disabled" : disabled, "Location": location})
        print(df)
        # quit() rather than close(): it also ends the chromedriver process,
        # so ~1000 iterations don't leak browser processes.
        driver.quit()
    except Exception:
        # On any failure (bad zip code, missing element, timeout) record
        # placeholder rows so all the output lists stay the same length.
        time_of_day.append('No Zipcode')
        location.append('No Zipcode')
        Hours.append('No Zipcode')
        price.append('No Zipcode')
        day.append('No Zipcode')
        disabled.append('No Zipcode')
        df = pd.DataFrame({"time_of_day":time_of_day,"Hours":Hours,"price":price,"Day":day,"Disabled" : disabled, "Location": location})
        driver.quit()  # assumes the driver was created before the exception
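One obvious saving before touching the parsing logic: create the browser once and reuse it for all ~1000 zip codes instead of launching and tearing down Chrome on every iteration. Below is a minimal sketch of the reshaped loop; scrape_zipcode is a hypothetical helper that would wrap the wait/parse logic above, and whether nemlig.com re-shows the zip prompt on every driver.get still has to be verified.

# Sketch: one browser session for the whole run. scrape_zipcode() is a
# hypothetical helper containing the wait/parse logic from the code above.
driver = webdriver.Chrome(executable_path=r'C:\Users\user\lib\chromedriver_77.0.3865.40.exe')
wait = WebDriverWait(driver, 10)
driver.maximize_window()
try:
    for i in range(len(danishzip)):
        driver.get("https://www.nemlig.com/")
        try:
            scrape_zipcode(driver, wait, str(danishzip['Zip'][i]))
        except Exception:
            pass  # append placeholder rows here, as in the original code
finally:
    driver.quit()  # a single teardown at the end of the run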

List of zip codes: https://en.wikipedia.org/wiki/List_of_postal_codes_in_Denmark

2 Answers:

Answer 0 (score: 3):

All you need is a single request that returns all the information as JSON:

import requests

headers = {
    'sec-fetch-mode': 'cors',
    'dnt': '1',
    'pragma': 'no-cache',
    'accept-encoding': 'gzip, deflate, br',
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_0) AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/77.0.3865.120 Safari/537.36',
    'accept': 'application/json, text/plain, */*',
    'cache-control': 'no-cache',
    'authority': 'www.nemlig.com',
    'referer': 'https://www.nemlig.com/',
    'sec-fetch-site': 'same-origin',
}

response = requests.get('https://www.nemlig.com/webapi/v2/Delivery/GetDeliveryDays?days=8', headers=headers)

json_data = response.json()

For example, you can change the days= parameter to 20 and get data for 20 days.
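The answer doesn't show what the returned JSON looks like, so before flattening it into a DataFrame it's worth dumping its structure. A quick inspection sketch (nothing here assumes any particular keys):

import json

# Pretty-print the start of the payload to see how delivery days and
# time slots are nested before deciding how to flatten them.
print(json.dumps(json_data, indent=2)[:2000])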

Answer 1 (score: 1):

Selenium is not meant for web scraping.

Try to find nemlig.com's internal API. Instead of waiting for the JS to render, find the HTTP endpoints that return the data you need. You can do this with your browser's developer tools or a tool like Burp Suite.

After that, just harvest it with requests/urllib.
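Once an endpoint is found (the GetDeliveryDays URL from the answer above is one such endpoint), the harvest is a plain HTTP loop. A sketch using a shared requests.Session follows; how a zip code gets attached to the request (cookie, query parameter, or an earlier call) is exactly what the developer tools have to reveal, so that part is left out.

import requests

# One keep-alive session for the whole harvest instead of a browser.
session = requests.Session()
session.headers.update({
    'accept': 'application/json, text/plain, */*',
    'user-agent': 'Mozilla/5.0',
})

resp = session.get('https://www.nemlig.com/webapi/v2/Delivery/GetDeliveryDays?days=8',
                   timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors
data = resp.json()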

https://ianlondon.github.io/blog/web-scraping-discovering-hidden-apis/