Scrapy error scraping Mister Auto

Date: 2018-07-08 19:37:17

Tags: python python-3.x web-scraping scrapy scrapy-spider

Please help me ;) I am trying to do some scraping of Mister Auto, but I run into the following output:

2018-07-08 21:24:37 [scrapy.utils.log] INFO: Scrapy 1.5.0 started (bot: mercado)

2018-07-08 21:24:37 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.7, cssselect 1.0.1, parsel 1.2.0, w3lib 1.18.0, Twisted 17.5.0, Python 2.7.14 |Anaconda, Inc.| (default, Nov  8 2017, 13:40:45) [MSC v.1500 64 bit (AMD64)], pyOpenSSL 17.5.0 (OpenSSL 1.0.2n  7 Dec 2017), cryptography 2.1.4, Platform Windows-8-6.2.9200-SP0
2018-07-08 21:24:37 [scrapy.crawler] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'mercado.spiders', 'FEED_URI': 'file.csv', 'SPIDER_MODULES': ['mercado.spiders'], 'BOT_NAME': 'mercado', 'ROBOTSTXT_OBEY': True, 'FEED_FORMAT': 'csv'}
2018-07-08 21:24:37 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2018-07-08 21:24:38 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-07-08 21:24:38 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-07-08 21:24:38 [scrapy.middleware] INFO: Enabled item pipelines:
['mercado.pipelines.MercadoPipeline']
2018-07-08 21:24:38 [scrapy.core.engine] INFO: Spider opened
2018-07-08 21:24:38 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-07-08 21:24:38 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-07-08 21:24:38 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.mister-auto.es/robots.txt> (referer: None)
2018-07-08 21:24:39 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.misterauto.es/global_search2.htmlidx=prod_monoindex_ESes&q=FEBI+BILSTEIN> (referer: None)
2018-07-08 21:24:39 [scrapy.core.engine] INFO: Closing spider (finished)
2018-07-08 21:24:39 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 565,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 20787,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 7, 8, 19, 24, 39, 615000),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2018, 7, 8, 19, 24, 38, 201000)}
2018-07-08 21:24:39 [scrapy.core.engine] INFO: Spider closed (finished)

I don't know why I can't get any results; every time I launch Scrapy I get the same output. Here is my spider:

class MercadoSpider(CrawlSpider):
    name = 'mercado'
    item_count = 0
    allowed_domain = ['https://www.mister-auto.es']
    start_urls = ['https://www.mister-auto.es/global_search2.html?idx=prod_monoindex_ESes&q=FEBI+BILSTEIN']

    rules = {
        Rule(LinkExtractor(allow=(), restrict_xpaths=('//*[@id="pagination2"]/ul/li[11]/a'))),
        Rule(LinkExtractor(allow=(), restrict_xpaths=('//div[@class="produit_header_name"]')),
             callback='parse_item', follow=False)
    }


def parse_item(self, response):
    ml_item = MercadoItem()

    # product info
    ml_item['articulo'] = response.xpath('normalize-space(//h1)').extract()
    ml_item['precio'] = response.xpath('normalize-space(//span[@class="prix"])').extract()
    self.item_count += 1
    yield ml_item
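One thing I am not sure about: Scrapy expects the attribute to be named `allowed_domains` (plural) and to contain bare host names, not full URLs, otherwise offsite filtering of followed links may not behave as intended. A quick standard-library check of what the bare host actually is (variable names here are just for illustration):

```python
from urllib.parse import urlsplit

# Extract the bare host from the start URL; this is the form
# allowed_domains expects ('www.mister-auto.es'), rather than
# the full URL 'https://www.mister-auto.es'.
start_url = "https://www.mister-auto.es/global_search2.html?idx=prod_monoindex_ESes&q=FEBI+BILSTEIN"
host = urlsplit(start_url).hostname
print(host)  # -> www.mister-auto.es
```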

Could you please help me? I have tried changing the spider but with no results.
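For what it's worth, the request URL in the log above (`global_search2.htmlidx=...`) appears to have lost the `?` that should separate the path from the query string, so the search parameters may never reach the server. The difference is easy to see with the standard library (the second URL below is the first with the `?` restored, shown here only for comparison):

```python
from urllib.parse import urlsplit, parse_qs

# URL roughly as it appears in the crawl log: no '?' before 'idx='
logged = "https://www.mister-auto.es/global_search2.htmlidx=prod_monoindex_ESes&q=FEBI+BILSTEIN"
# Same URL with the '?' separator restored
fixed = "https://www.mister-auto.es/global_search2.html?idx=prod_monoindex_ESes&q=FEBI+BILSTEIN"

# Without the '?', urlsplit sees no query string at all,
# so the idx and q parameters are silently lost.
print(parse_qs(urlsplit(logged).query))  # -> {}
print(parse_qs(urlsplit(fixed).query))   # -> {'idx': ['prod_monoindex_ESes'], 'q': ['FEBI BILSTEIN']}
```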

0 answers