scrapy: passing arguments to the crawler programmatically

Date: 2017-07-23 02:31:21

Tags: python scrapy web-crawler

I am building a scrapy crawler. I have a Python module that fetches URLs from a database and should configure scrapy to launch a spider for each URL. Because I start scrapy from my own script, I don't know how to pass it the arguments that the -a command-line switch would normally carry, so that each invocation receives a different URL.

Here is the code of the scrapy caller:

import os

import _mysql
from scrapy.crawler import CrawlerProcess
from scrapy.settings import Settings

from webscraper.spiders import ImageSpider  # adjust to wherever the spider class actually lives


def scrape_next_url():
    # host, username, password and database_name are defined elsewhere in the module.
    conn = _mysql.connect(host, username, password, database_name)

    # Grab the next unprocessed URL from the queue table.
    conn.query("select min(sortorder) from url_queue where processed = false for update")
    query_result = conn.store_result()
    url_index = query_result.fetch_row()[0][0]

    conn.query("select url from url_queue where sortorder = " + str(url_index))
    query_result = conn.store_result()
    url_at_index = query_result.fetch_row()[0][0]

    # Mark the URL as processed.
    conn.query("update url_queue set processed = true where sortorder = " + str(url_index))
    conn.commit()
    conn.close()

    # Load the project settings module.
    settings = Settings()
    os.environ['SCRAPY_SETTINGS_MODULE'] = 'webscraper.settings'
    settings_module_path = os.environ['SCRAPY_SETTINGS_MODULE']
    settings.setmodule(settings_module_path, priority='project')

    # Start the crawl; the URL is currently pushed onto the class-level start_urls list.
    process = CrawlerProcess(settings)
    ImageSpider.start_urls.append(url_at_index)
    process.crawl(ImageSpider)
    process.start()

Help!

Note: I came across this question (Scrapy: Pass arguments to cmdline.execute()), but I would like to do it programmatically if possible.
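
For reference, the command-line behaviour I want to replicate is essentially scrapy crawl &lt;spider&gt; -a url=&lt;value&gt;. My understanding is that the in-process equivalent would look roughly like the sketch below (ImageSpider and url_at_index are the names from my caller above), but I am not sure this is the intended way:

    # Keyword arguments passed to crawl() are forwarded to the spider's __init__,
    # the same way -a key=value is on the command line.
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings  # honours SCRAPY_SETTINGS_MODULE

    process = CrawlerProcess(get_project_settings())
    process.crawl(ImageSpider, url=url_at_index)  # the spider class, plus its kwargs
    process.start()                               # blocks until the crawl finishes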

Edit:

I have followed your suggestion; the spider now contains the following code:

    def __init__(self, url=None, *pargs, **kwargs):
        super(ImageSpider, self).__init__(*pargs, **kwargs)
        self.start_urls.append(url.strip())

And in my caller:

    process = CrawlerProcess(settings)
    process.crawl(ImageSpider, url=url_at_index)

I know the argument is reaching __init__, because the url.strip() call would fail otherwise. But the result is a spider that runs yet does not crawl anything:

(webcrawler) faisca:webscraper dlsa$ python scraper_launcher.py 
2017-07-25 00:42:16 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: webscraper)
2017-07-25 00:42:16 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'webscraper', 'NEWSPIDER_MODULE': 'webscraper.spiders', 'SPIDER_MODULES': ['webscraper.spiders']}
2017-07-25 00:42:16 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.memusage.MemoryUsage']
2017-07-25 00:42:16 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-07-25 00:42:16 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-07-25 00:42:16 [scrapy.middleware] INFO: Enabled item pipelines:
['webscraper.pipelines.WebscraperPipeline']
2017-07-25 00:42:16 [scrapy.core.engine] INFO: Spider opened
2017-07-25 00:42:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-07-25 00:42:16 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
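
In case it is relevant: one thing I intend to try next is building start_urls as an instance attribute and logging from start_requests, to confirm the URL actually reaches the spider. A rough debugging sketch (the spider name and the parse body are placeholders, not my real code):

    import scrapy

    class ImageSpider(scrapy.Spider):
        name = 'imagespider'  # placeholder; the real spider already defines its name

        def __init__(self, url=None, *pargs, **kwargs):
            super(ImageSpider, self).__init__(*pargs, **kwargs)
            # Keep the URL on the instance instead of appending to the class-level list.
            self.start_urls = [url.strip()] if url else []

        def start_requests(self):
            for url in self.start_urls:
                self.logger.info('queueing %s', url)  # confirm the URL got through
                yield scrapy.Request(url, callback=self.parse)

        def parse(self, response):
            self.logger.info('got %s (%d bytes)', response.url, len(response.body))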

1 answer:

Answer 0 (score: 0)

Pass the arguments like this:

process.crawl(MySpider, limit=query_to_run, cursor=cursor, conn=conn)

Then in your spider:

from scrapy.spiders import CrawlSpider

class MySpider(CrawlSpider):
    # some code here
    def __init__(self, limit=None, cursor=None, conn=None, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
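
And in the caller, pass the spider class itself (not an instance) along with those keyword arguments, so Scrapy constructs the spider for you. A rough end-to-end sketch (query_to_run, cursor and conn stand for whatever your script has already prepared):

    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings

    process = CrawlerProcess(get_project_settings())
    process.crawl(MySpider, limit=query_to_run, cursor=cursor, conn=conn)
    process.start()

Inside __init__ you can then keep the values on self (self.limit = limit, and so on) and use them to build self.start_urls or to drive start_requests().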