Adding headers to a Scrapy Spider

Time: 2019-02-14 21:30:50

Tags: python scrapy

For a project I am making a large number of Scrapy requests for certain search terms. The requests use the same search term but different time frames, as shown by the dates in the URLs below.

Although the URLs refer to different dates and pages, I receive the same values as output for all requests. It appears that the script takes the first value it obtains and assigns the same output to all subsequent requests.

import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['google.com']
    start_urls = ['https://www.google.com/search?q=Activision&biw=1280&bih=607&source=lnt&tbs=cdr%3A1%2Ccd_min%3A01%2F01%2F2004%2Ccd_max%3A12%2F31%2F2004&tbm=nws',
                  'https://www.google.com/search?q=Activision&biw=1280&bih=607&source=lnt&tbs=cdr%3A1%2Ccd_min%3A01%2F01%2F2005%2Ccd_max%3A12%2F31%2F2005&tbm=nws',
                  'https://www.google.com/search?q=Activision&biw=1280&bih=607&source=lnt&tbs=cdr%3A1%2Ccd_min%3A01%2F01%2F2006%2Ccd_max%3A12%2F31%2F2006&tbm=nws',
    ]

    def parse(self, response):
        item = {
            'search_title': response.css('input#sbhost::attr(value)').get(),
            'results': response.css('#resultStats::text').get(),
            'url': response.url,
        }
        yield item

I found a thread discussing a similar problem with BeautifulSoup. The solution there was to add headers to the script so that it uses a browser as the User-Agent:

import requests

headers = {
    "User-Agent":
        "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36"
}
payload = {'as_epq': 'James Clark', 'tbs':'cdr:1,cd_min:01/01/2015,cd_max:01/01/2015', 'tbm':'nws'}
r = requests.get("https://www.google.com/search", params=payload, headers=headers)

The way to apply headers in Scrapy seems to be different though. Does anyone know how best to include them in Scrapy, particularly with regard to start_urls, which contains several URLs at once?

2 Answers:

Answer 0 (score: 3)

You don't need to modify the headers here. You need to set the user agent, which Scrapy allows you to do directly:

import scrapy

class QuotesSpider(scrapy.Spider):
    # ...
    user_agent = 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36'
    # ...

Now you will get output like this:

'results': 'About 357 results', ...
'results': 'About 215 results', ...
'results': 'About 870 results', ...
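The same User-Agent can also be supplied through Scrapy's settings instead of the spider attribute, for example per spider via custom_settings. This is just a minimal sketch of that alternative (not part of the answer above); the agent string is the one quoted in the question:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    # Per-spider settings override the project-wide values from settings.py
    custom_settings = {
        'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36',
    }
    # ... start_urls and parse() as in the question ...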

Answer 1 (score: 0)

As per the Scrapy 1.7.3 documentation, your headers should not be generic like other headers; they should match the site you want to scrape. You can find the right headers in the network tab of your browser's developer console.

Add them as shown below and print the response.

# -*- coding: utf-8 -*-
import scrapy


class AaidSpider(scrapy.Spider):
    name = 'aaid'

    def start_requests(self):
        url = "https://www.eventscribe.com/2019/AAOMS-CSIOMS/ajaxcalls/PresenterInfo.asp?efp=SVNVS1VRTEo4MDMx&PresenterID=597498&rnd=0.8680339"

        # Set the headers here; copy them from the network tab for the site you are scraping.
        headers = {
            'Accept': '*/*',
            'Accept-Encoding': 'gzip, deflate, br',
            'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8',
            'Connection': 'keep-alive',
            'Host': 'www.eventscribe.com',
            'Referer': 'https://www.eventscribe.com/2018/ADEA/speakers.asp?h=Browse%20By%20Speaker',
            'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36',
            'X-Requested-With': 'XMLHttpRequest'
        }

        # Send the request with the custom headers
        yield scrapy.Request(url, method='GET', headers=headers, dont_filter=False, callback=self.parse)

    def parse(self, response):
        print(response.body)  # If the response is HTML

        # If the response is JSON: import json and use
        # jsonresponse = json.loads(response.text)
        # print(jsonresponse)
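To tie this back to the original question about start_urls containing multiple URLs: one way to attach the same headers to every request is to override start_requests() and build the requests yourself. This is a rough sketch based on the question's spider, not part of either answer above:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['google.com']
    start_urls = [
        # ... the same dated Google News URLs as in the question ...
    ]

    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36',
    }

    def start_requests(self):
        # Attach the same headers to every request built from start_urls
        for url in self.start_urls:
            yield scrapy.Request(url, headers=self.headers, callback=self.parse)

    def parse(self, response):
        yield {
            'search_title': response.css('input#sbhost::attr(value)').get(),
            'results': response.css('#resultStats::text').get(),
            'url': response.url,
        }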