Attaching cookies to a request

Date: 2016-04-14 11:34:41

Tags: python scrapy

I'm trying to get the price of a tool from the Castorama website, but so far I'm having trouble building the right request:

http://www.castorama.pl/produkty/narzedzia-i-artykuly/elektronarzedzia-przenosne-i-akcesoria/szlifierki-i-polerki/szlifierki-oscylacyjne/szlifierka-oscylacyjna-pp-110w.html

Unfortunately, it isn't straightforward. The price depends on the store location, so before a price is shown you have to set your store. On the site, I click 'ZOBACZ CENĘ' (the yellow box on the right). Then I fill in my postal code, e.g. '05-123', in the middle field and click the 'SZUKAJ PO KODZIE' button on the right. Finally, I click the yellow 'USTAW' button in the pop-up box.

After doing that, I get the product price I want. I'd like to replicate this behaviour with Scrapy. To work out how, I inspected the Network tab in Chrome Developer Tools, filtered to XHR, to identify the request responsible for fetching the price. I believe the right one is 'getProductPriceStockByStore/':

Request:

URL:http://www.castorama.pl/bold_all/data/getProductPriceStockByStore/
Request Method:POST
Status Code:200 OK
Remote Address:109.205.50.98:80

Request headers:

Accept:text/javascript, text/html, application/xml, text/xml, */*
Accept-Encoding:gzip, deflate
Accept-Language:en-GB,en;q=0.8,pl;q=0.6
Connection:keep-alive
Content-Length:39
Content-type:application/x-www-form-urlencoded; charset=UTF-8
Cookie:selected_shop_flag=3; CACHED_FRONT_FORM_KEY=2MxQx5N1GeBOoDFl; localizationPopup=1; selected_shop=1; selected_shop_store_view=8002; bold_wishlist=3lg7qtm3teba7s1sbfg77hi352; frontend=3lg7qtm3teba7s1sbfg77hi352; VIEWED_PRODUCT_IDS=30052; cSID_VM=1460629378710; _ga=GA1.2.91284606.1460626559; _ceg.s=o5mcub; _ceg.u=o5mcub; _dc_gtm_UA-27193958-1=1
Host:www.castorama.pl
Origin:http://www.castorama.pl
Referer:http://www.castorama.pl/produkty/narzedzia-i-artykuly/elektronarzedzia-przenosne-i-akcesoria/szlifierki-i-polerki/szlifierki-oscylacyjne/szlifierka-oscylacyjna-pp-110w.html
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/49.0.2623.108 Chrome/49.0.2623.108 Safari/537.36

Form data:

isAjax:true
product_id:30052
store:8002

Response:

{"products":{"30052":{"price":"93.98","qty":"7.00","stock_status":1,"html":"in"}},"store":"8002","templates":{"in":"<span><span class=\"in-stock\">Dost\u0119pny<\/span><\/span>","out":"<span><span class=\"out-of-stock\">Niedost\u0119pny<\/span><\/span>","phone":"<span><span class=\"low-stock\">Na zam\u00f3wienie<\/span><\/span>","backorder":"<span><span class=\"backorder-stock\">Na zam\u00f3wienie<\/span><\/span>"},"status":true}

So I turned to Scrapy to implement a solution. I decided to create a POST request carrying cookies and headers similar to the ones above:

import scrapy
from Castorama.items import CastoramaItem

class DmozSpider(scrapy.Spider):
    name = "Castorama"
    allowed_domains = ["castorama.pl"]
    start_urls = ["http://www.castorama.pl/bold_all/data/getProductPriceStockByStore/"]

    def start_Request(self):

        req=scrapy.Request(start_urls[0]
            , method='POST'
            , cookies ={'selected_shop_flag':3,
                'CACHED_FRONT_FORM_KEY':'2MxQx5N1GeBOoDFl',
                'selected_shop':1,
                'selected_shop_store_view':8002,
                'VIEWED_PRODUCT_IDS':30052,
                'frontend':'3lg7qtm3teba7s1sbfg77hi352',
                'cSID_VM':1460626558358}
            ,callback='Rozkoduj'
            )
        yield req

    def Rozkoduj(self, response):
        print response.body

But I'm having no luck with this code. My console log:

2016-04-14 12:54:09 [scrapy] INFO: Scrapy 1.0.5 started (bot: Castorama)
2016-04-14 12:54:09 [scrapy] INFO: Optional features available: ssl, http11, boto
2016-04-14 12:54:09 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'Castorama.spiders', 'SPIDER_MODULES': ['Castorama.spiders'], 'BOT_NAME': 'Castorama'}
2016-04-14 12:54:09 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-04-14 12:54:09 [boto] DEBUG: Retrieving credentials from metadata server.
2016-04-14 12:54:10 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
  File "/home/michal/anaconda2/lib/python2.7/site-packages/boto/utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "/home/michal/anaconda2/lib/python2.7/urllib2.py", line 431, in open
    response = self._open(req, data)
  File "/home/michal/anaconda2/lib/python2.7/urllib2.py", line 449, in _open
    '_open', req)
  File "/home/michal/anaconda2/lib/python2.7/urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "/home/michal/anaconda2/lib/python2.7/urllib2.py", line 1227, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/home/michal/anaconda2/lib/python2.7/urllib2.py", line 1197, in do_open
    raise URLError(err)
URLError: <urlopen error timed out>
2016-04-14 12:54:10 [boto] ERROR: Unable to read instance data, giving up
2016-04-14 12:54:10 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-04-14 12:54:10 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-04-14 12:54:10 [scrapy] INFO: Enabled item pipelines: 
2016-04-14 12:54:10 [scrapy] INFO: Spider opened
2016-04-14 12:54:10 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-04-14 12:54:10 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-04-14 12:54:10 [scrapy] DEBUG: Crawled (200) <GET http://www.castorama.pl/bold_all/data/getProductPriceStockByStore/> (referer: None)
2016-04-14 12:54:10 [scrapy] ERROR: Spider error processing <GET http://www.castorama.pl/bold_all/data/getProductPriceStockByStore/> (referer: None)
Traceback (most recent call last):
  File "/home/michal/anaconda2/lib/python2.7/site-packages/twisted/internet/defer.py", line 588, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/michal/anaconda2/lib/python2.7/site-packages/scrapy/spiders/__init__.py", line 76, in parse
    raise NotImplementedError
NotImplementedError
2016-04-14 12:54:10 [scrapy] INFO: Closing spider (finished)
2016-04-14 12:54:10 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 256,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 311,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 4, 14, 10, 54, 10, 776463),
 'log_count/DEBUG': 3,
 'log_count/ERROR': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/NotImplementedError': 1,
 'start_time': datetime.datetime(2016, 4, 14, 10, 54, 10, 477689)}
2016-04-14 12:54:10 [scrapy] INFO: Spider closed (finished)

And here are my final questions: Is my approach correct? Should I try to attach these cookies to the request, as in the code above, or should I try a completely different way? And if I'm heading in the right direction, what should I change in the code to build the right request?

Thanks in advance for your help.

UPDATE: here is a revised version of the spider after Pawel Miech's corrections. It's better in that the request now works, but I still don't get the proper response.

import scrapy
from Castorama.items import CastoramaItem

class DmozSpider(scrapy.Spider):
    name = "Castorama"
    allowed_domains = ["castorama.pl"]
    start_urls=['http://www.castorama.pl']

    def parse(self, response):
        start_urls = ["http://www.castorama.pl/bold_all/data/getProductPriceStockByStore/"]
        req=scrapy.Request(start_urls[0]
            , method='POST'
            , cookies ={'selected_shop_flag':3,
                'CACHED_FRONT_FORM_KEY':'2MxQx5N1GeBOoDFl',
                'selected_shop':1,
                'selected_shop_store_view':8002,
                'VIEWED_PRODUCT_IDS':30052,
                'frontend':'3lg7qtm3teba7s1sbfg77hi352',
                'cSID_VM':1460626558358}
            ,callback=self.rozkoduj
            )
        yield req

    def rozkoduj(self, response):
        print '>>>>>>>>>'
        print response.body

1 Answer:

Answer (score: 1)

Scrapy requests are asynchronous, and every request must have a callback. If none is given, the callback is set to the spider.parse method; and if there is no spider.parse method either, you get the NotImplementedError seen in this stack trace:

   Traceback (most recent call last):
      File "/home/michal/anaconda2/lib/python2.7/site-packages/twisted/internet/defer.py", line 588, in _runCallbacks
        current.result = callback(current.result, *args, **kw)
      File "/home/michal/anaconda2/lib/python2.7/site-packages/scrapy/spiders/__init__.py", line 76, in parse
        raise NotImplementedError
    NotImplementedError

So start by adding a proper callback for the POST (it must be a reference to a spider method, not a string, e.g. self.rozkoduj instead of "Rozkoduj").
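For illustration, here is a minimal sketch of such a request. It uses scrapy.FormRequest so that the form data captured in DevTools (isAjax, product_id, store) is sent as the POST body; the spider name and the subset of cookies are assumptions taken from the question:

import scrapy

class CastoramaPriceSpider(scrapy.Spider):
    name = "castorama_price"
    allowed_domains = ["castorama.pl"]
    start_urls = ["http://www.castorama.pl"]

    def parse(self, response):
        # form data copied from the XHR request captured in DevTools;
        # FormRequest issues a POST and URL-encodes the body for you
        yield scrapy.FormRequest(
            "http://www.castorama.pl/bold_all/data/getProductPriceStockByStore/",
            formdata={"isAjax": "true", "product_id": "30052", "store": "8002"},
            cookies={"selected_shop_flag": "3", "selected_shop": "1",
                     "selected_shop_store_view": "8002"},
            callback=self.rozkoduj,  # a method reference, not a string
        )

    def rozkoduj(self, response):
        print response.body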

The urlopen error comes from boto, which complains when you don't have S3 configured. It's ugly, but it can be ignored until someone fixes this ticket in Scrapy core.


"And here is my final question: Is my approach correct? Should I try to attach these cookies to the request, as in the code above?"

The answer is, as usual: it depends. If you only care about sending these cookies for this one request, then your approach is correct. But if you want the cookies included in all requests the spider sends, including the requests issued from the POST's callback, you must add the cookies to the cookiejar. Setting cookies in the cookiejar is actually not easy; there is a ticket asking for a simpler way here: https://github.com/scrapy/scrapy/issues/1878

In short, setting a cookie in the cookiejar goes something like this (this is only pseudo-code and pointers):

from scrapy.downloadermiddlewares.cookies import CookiesMiddleware

# must be a cookielib.Cookie object
# must pass all kwargs for cookielib.Cookie
cookie = Cookie(**kwargs)
# the cookiejar is stored inside the cookies downloader middleware
all_mw = spider.crawler.engine.downloader.middleware.middlewares
# find the cookie middleware in that list
cookie_middleware = next(
    mw for mw in all_mw if isinstance(mw, CookiesMiddleware))
# the middleware keeps its cookiejars in the `jars` attribute
cookiejar = cookie_middleware.jars[None]
cookiejar.set_cookie(cookie)
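Note that cookielib.Cookie takes a long list of required arguments. For completeness, a hand-built cookie mirroring selected_shop_store_view from the question could look like this (the domain and path values are assumptions, not taken from the site):

from cookielib import Cookie  # http.cookiejar in Python 3

cookie = Cookie(
    version=0, name='selected_shop_store_view', value='8002',
    port=None, port_specified=False,
    domain='.castorama.pl', domain_specified=True, domain_initial_dot=True,
    path='/', path_specified=True,
    secure=False, expires=None, discard=True,
    comment=None, comment_url=None, rest={}, rfc2109=False,
)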