Passing arguments to a Scrapy spider from a Python script

Asked: 2015-02-24 20:25:39

Tags: python python-2.7 web-scraping scrapy scrapy-spider

I can run a crawl from a Python script using the following recipe from the wiki:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from testspiders.spiders.followall import FollowAllSpider
from scrapy.utils.project import get_project_settings

spider = FollowAllSpider(domain='scrapinghub.com')
settings = get_project_settings()
crawler = Crawler(settings)
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run()

As you can see, I can pass the domain to FollowAllSpider, but my question is: how can I pass the start_urls (actually a date that will be appended to a fixed URL) to my spider class using the code above?

Here is my spider class:

from scrapy.contrib.spiders import CrawlSpider
from scrapy.selector import Selector

from poptop.items import PoptopItem


class MySpider(CrawlSpider):
    name = 'tw'

    def __init__(self, date):
        y, m, d = date.split('-')  # this is a test; it could be split with a regex!
        try:
            y, m, d = int(y), int(m), int(d)
        except ValueError:
            raise ValueError('Enter a valid date')

        self.allowed_domains = ['mydomin.com']
        self.start_urls = ['my_start_urls{}-{}-{}'.format(y, m, d)]

    def parse(self, response):
        questions = Selector(response).xpath('//div[@class="result-link"]/span/a/@href') 
        for question in questions:
            item = PoptopItem()
            item['url'] = question.extract()
            yield item['url']

And here is my script:

from pdfcreator import convertor
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
#from testspiders.spiders.followall import FollowAllSpider
from scrapy.utils.project import get_project_settings
from poptop.spiders.stackoverflow_spider import MySpider
from poptop.items import PoptopItem

settings = get_project_settings()
crawler = Crawler(settings) 
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()

date = raw_input('Enter the date with this format (d-m-Y) : ')
print date
spider = MySpider(date=date)
crawler.crawl(spider)
crawler.start()
log.start()
item = PoptopItem()

for url in item['url']:
    convertor(url)

reactor.run() # the script will block here until the spider_closed signal was sent

If I just print the item, I get the following error:

2015-02-25 17:13:47+0330 [tw] ERROR: Spider must return Request, BaseItem or None, got 'unicode' in <GET test-link2015-1-17>
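
The error comes from parse() in the spider above: it yields item['url'], which is a unicode string, whereas a Scrapy callback must return a Request, an item, or None. A minimal fix is to yield the item itself, e.g.:

    def parse(self, response):
        questions = Selector(response).xpath('//div[@class="result-link"]/span/a/@href')
        for question in questions:
            item = PoptopItem()
            item['url'] = question.extract()
            yield item  # yield the whole item, not the unicode string inside it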

My items:

import scrapy


class PoptopItem(scrapy.Item):
    titles = scrapy.Field()
    content = scrapy.Field()
    url = scrapy.Field()

1 Answer:

Answer 0 (score: 8):

You need to modify your __init__() constructor to accept the date argument. Also, I would use datetime.strptime() to parse the date string:

from datetime import datetime

from scrapy.contrib.spiders import CrawlSpider


class MySpider(CrawlSpider):
    name = 'tw'
    allowed_domains = ['test.com']

    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)

        date = kwargs.get('date')
        if not date:
            raise ValueError('No date given')

        dt = datetime.strptime(date, "%m-%d-%Y")
        self.start_urls = ['http://test.com/{dt.year}-{dt.month}-{dt.day}'.format(dt=dt)]

Then, you would instantiate the spider this way:

spider = MySpider(date='01-01-2015')
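
As a side note, the same keyword argument can also be supplied from the command line when running the spider through the scrapy tool, since Scrapy forwards -a options to the spider constructor as keyword arguments (e.g. scrapy crawl tw -a date=01-01-2015).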

Or, you can avoid parsing the date altogether by passing a datetime instance in the first place:

class MySpider(CrawlSpider):
    name = 'tw'
    allowed_domains = ['test.com']

    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs) 

        dt = kwargs.get('dt')
        if not dt:
            raise ValueError('No date given')

        self.start_urls = ['http://test.com/{dt.year}-{dt.month}-{dt.day}'.format(dt=dt)]

spider = MySpider(dt=datetime(year=2014, month=1, day=1))

And, just FYI, see this answer as a detailed example of how to run Scrapy from a script.
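
To tie this back to the script in the question, below is a minimal sketch, under the same pre-1.0 Scrapy API used above, of how the scraped URLs could be collected through the item_scraped signal and processed only after the crawl has finished, instead of being read from a freshly created, empty PoptopItem. It assumes parse() yields the item itself, as discussed above:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.utils.project import get_project_settings

from poptop.spiders.stackoverflow_spider import MySpider
from pdfcreator import convertor

scraped_urls = []

def collect_item(item, response, spider):
    # called once for every item the spider yields
    scraped_urls.append(item['url'])

settings = get_project_settings()
crawler = Crawler(settings)
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.signals.connect(collect_item, signal=signals.item_scraped)
crawler.configure()

date = raw_input('Enter the date with this format (d-m-Y) : ')
crawler.crawl(MySpider(date=date))
crawler.start()
log.start()
reactor.run()  # blocks until the spider_closed signal stops the reactor

for url in scraped_urls:  # the crawl is finished; process the results
    convertor(url)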