Scrapy + Selenium problem

Date: 2017-08-26 15:35:20

Tags: selenium web-scraping scrapy scrapy-spider

I'm trying to scrape the website of a well-known UK retailer using Selenium and Scrapy (see the code below). I keep getting a [scrapy.core.scraper] ERROR: Spider error processing and have no idea what else to try (I've been at it for about three hours now). Thanks for your support!

import scrapy
from selenium import webdriver
from nl_scrape.items import NlScrapeItem
import time
from datetime import date  # date.today() is used in parse() below

class ProductSpider(scrapy.Spider):
    name = "product_spider"
    allowed_domains = ['newlook.com']
    start_urls = ['http://www.newlook.com/uk/womens/clothing/c/uk-womens-clothing?comp=NavigationBar%7Cmn%7Cwomens%7Cclothing#/?q=:relevance&page=1&sort=relevance&content=false']

    def __init__(self):
        self.driver = webdriver.Safari()
        self.driver.set_window_size(800,600)
        time.sleep(4)

    def parse(self, response):
        self.driver.get(response.url)
        time.sleep(4)

        # Collect products
        products = driver.find_elements_by_class_name('plp-item ng-scope')

        # Iterate over products; extract data and append individual features to NlScrapeItem
        for item in products:

            # Pull features
            desc = item.find_element_by_class_name('product-item__name link--nounderline ng-binding').text
            href = item.find_element_by_class_name('plp-carousel__img-link ng-scope').get_attribute('href')

            # Price symbol removal and conversion to a number
            priceString = item.find_element_by_class_name('price ng-binding').text
            priceInt = priceString.split('£')[1]
            price = float(priceInt)

            # Generate a product identifier
            identifier = href.split('/p/')[1].split('?comp')[0]
            identifier = int(identifier)

            # datetime
            dt = date.today()
            dt = dt.isoformat()

            # NlScrapeItem
            item = NlScrapeItem()

            # Append product to NlScrapeItem
            item['id'] = identifier
            item['href'] = href
            item['description'] = desc
            item['price'] = price
            item['firstSighted'] = dt
            item['lastSighted'] = dt
            yield item

        self.driver.close()

2017-08-26 15:48:38 [scrapy.core.scraper] ERROR: Spider error processing <GET http://www.newlook.com/uk/womens/clothing/c/uk-womens-clothing?comp=NavigationBar%7Cmn%7Cwomens%7Cclothing#/?q=:relevance&page=1&sort=relevance&content=false> (referer: None)

Traceback (most recent call last):
  File "/Users/username/Documents/nl_scraping/nl_env/lib/python3.6/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/Users/username/Documents/nl_scraping/nl_scrape/nl_scrape/spiders/product_spider.py", line 18, in parse
    products = driver.find_elements_by_class_name('plp-item ng-scope')
NameError: name 'driver' is not defined

1 Answer:

Answer 0 (score: 2)

There are two problems with your code.

def parse(self, response):
    self.driver.get(response.url)
    time.sleep(4)

    # Collect products
    products = driver.find_elements_by_class_name('plp-item ng-scope')

Quite conveniently, you switched from self.driver to a bare driver. Don't do that. Instead, add the following at the top of the function:

def parse(self, response):
    driver = self.driver
    driver.get(response.url)
    time.sleep(4)

    # Collect products
    products = driver.find_elements_by_class_name('plp-item ng-scope')

Next, you call self.driver.close() at the end of the function, so the browser is closed as soon as the very first URL has been processed. That is wrong, so remove that line.
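If you still want the browser to shut down when the crawl ends rather than per request, one option is to do it in the spider's closed() method, which Scrapy calls once when the spider finishes. Below is a minimal sketch of that idea; the class and attribute names simply mirror the question's code, and using quit() instead of close() is a choice (quit() ends the whole browser session, close() only closes the current window):

import scrapy
from selenium import webdriver

class ProductSpider(scrapy.Spider):
    name = "product_spider"

    def __init__(self):
        # One browser instance shared across all requests
        self.driver = webdriver.Safari()

    def parse(self, response):
        driver = self.driver   # local alias, as suggested above
        driver.get(response.url)
        # ... extract and yield items here; do NOT close the driver per request ...

    def closed(self, reason):
        # Scrapy calls closed() once when the spider finishes,
        # so this is a safe place to shut the browser down.
        self.driver.quit()

This way the browser stays open for every URL the spider visits and is torn down exactly once at the end of the crawl.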