How can I enable cookies and use Scrapy with this URL?

Asked: 2017-06-15 05:10:04

Tags: cookies scrapy scrapy-spider scrapy-shell

I am working on a scraping project with Scrapy, using this URL: https://www.walmart.ca/en/clothing-shoes-accessories/men/mens-tops/N-2566+11

I tried opening the URL in the Scrapy shell, but it returned a 430 error, so I added some settings and a user agent:

scrapy shell -s COOKIES_ENABLED=1 -s USER_AGENT='Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:46.0) Gecko/20100101 Firefox/46.0' "https://www.walmart.ca/en/clothing-shoes-accessories/men/mens-tops/N-2566+11"

That fetched the page with a 200 status, but as soon as I called view(response) it took me to a page saying: Sorry! Your web browser is not accepting cookies.

Here is a screenshot of the log: [screenshot]

3 answers:

Answer 0 (score: 1)

You should have

COOKIES_ENABLED = True

in your settings.py file.

Also see

COOKIES_DEBUG = True

to debug cookies; with it enabled you will see the cookies coming in with each response and going out with each request.
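As a concrete sketch, the two settings named in this answer would sit in the project's settings.py like this (the comments describe their documented behaviour; nothing else here comes from the question):

```python
# settings.py -- minimal sketch of the cookie-related Scrapy settings
COOKIES_ENABLED = True   # let the cookies middleware store and resend cookies
COOKIES_DEBUG = True     # log Cookie / Set-Cookie headers for every request/response
```
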

Answer 1 (score: 0)

Try sending all the required headers.

headers = {
    'dnt': '1',
    'accept-encoding': 'gzip, deflate, sdch, br',
    'accept-language': 'en-US,en;q=0.8',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'cache-control': 'max-age=0',
    'authority': 'www.walmart.ca',
    'cookie': 'JSESSIONID=E227789DA426B03664F0F5C80412C0BB.restapp-108799501-8-112264256; cookieLanguageType=en; deliveryCatchment=2000; marketCatchment=2001; zone=2; originalHttpReferer=; walmart.shippingPostalCode=V5M2G7; defaultNearestStoreId=1015; walmart.csrf=6f635f71ab4ae4479b8e959feb4f3e81d0ac9d91-1497631184063-441217ff1a8e4a311c2f9872; wmt.c=0; userSegment=50-percent; akaau_P1=1497632984~id=bb3add0313e0873cf64b5e0a73e3f5e3; wmt.breakpoint=d; TBV=7; ENV=ak-dal-prod; AMCV_C4C6370453309C960A490D44%40AdobeOrg=793872103%7CMCIDTS%7C17334',
    'referer': 'https://www.walmart.ca/en/clothing-shoes-accessories/men/mens-tops/N-2566+11',
}

yield Request(url='https://www.walmart.ca/en/clothing-shoes-accessories/men/mens-tops/N-2566+11', headers=headers)

You can implement it like this. Instead of start_urls, I would recommend the start_requests() method; it is easier to read.

from scrapy import Request
from scrapy.spiders import CrawlSpider
from myproject.items import CravlingItem  # adjust to your project's items module

class EasySpider(CrawlSpider):
    name = 'easy'

    def start_requests(self):
        headers = {
            'dnt': '1',
            'accept-encoding': 'gzip, deflate, sdch, br',
            'accept-language': 'en-US,en;q=0.8',
            'upgrade-insecure-requests': '1',
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
            'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
            'cache-control': 'max-age=0',
            'authority': 'www.walmart.ca',
            'cookie': 'JSESSIONID=E227789DA426B03664F0F5C80412C0BB.restapp-108799501-8-112264256; cookieLanguageType=en; deliveryCatchment=2000; marketCatchment=2001; zone=2; originalHttpReferer=; walmart.shippingPostalCode=V5M2G7; defaultNearestStoreId=1015; walmart.csrf=6f635f71ab4ae4479b8e959feb4f3e81d0ac9d91-1497631184063-441217ff1a8e4a311c2f9872; wmt.c=0; userSegment=50-percent; akaau_P1=1497632984~id=bb3add0313e0873cf64b5e0a73e3f5e3; wmt.breakpoint=d; TBV=7; ENV=ak-dal-prod; AMCV_C4C6370453309C960A490D44%40AdobeOrg=793872103%7CMCIDTS%7C17334',
            'referer': 'https://www.walmart.ca/en/clothing-shoes-accessories/men/mens-tops/N-2566+11',
        }

        yield Request(url='https://www.walmart.ca/en/clothing-shoes-accessories/men/mens-tops/N-2566+11', callback=self.parse_item, headers=headers)

    def parse_item(self, response):
        i = CravlingItem()
        i['title'] = " ".join(response.xpath('//a/text()').extract()).strip()
        yield i

Answer 2 (score: 0)

I can confirm that the COOKIES_ENABLED setting did not help fix the error. Instead, I got it working with the following Googlebot USER_AGENT:

Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1; http://www.google.com/bot.html) Chrome/W.X.Y.Z Safari/537.36

I figured this out thanks to the author of this script, which makes its requests with that user agent: https://github.com/juansimon27/scrapy-walmart/blob/master/product_scraping/spiders/spider.py