scrapy_splash.SplashRequest under scrapyd

Date: 2017-01-29 14:31:46

Tags: scrapy scrapyd scrapy-splash

I am seeing some strange behavior (as far as I can tell) when a SplashRequest callback is executed under scrapyd.

Spider source code

from scrapy.spiders import Spider
from scrapy_splash import SplashRequest

class SiteSaveSpider(Spider):
    name = "sitesavespider"

    def __init__(self, domain='', *args, **kwargs):
        super(SiteSaveSpider, self).__init__(*args, **kwargs)
        self.start_urls = [domain]
        self.allowed_domains = [domain]

    def start_requests(self):
        for url in self.start_urls:
            # Render the page through Splash before calling back into parse
            yield SplashRequest(url, callback=self.parse, args={'wait': 0.5})
            print("TEST after yield")

    def parse(self, response):
        print("TEST in parse")
        # response.body is bytes, so open the file in binary mode
        with open('/some_path/test.html', 'wb') as f:
            f.write(response.body)
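For context, SplashRequest only works when scrapy-splash is wired into the project settings, as described in the scrapy-splash README; a missing or undeployed settings module is a common source of differences between `scrapy crawl` and scrapyd runs. A sketch of the standard setup (adjust SPLASH_URL to wherever your Splash instance listens):

```python
# settings.py -- scrapy-splash setup per the scrapy-splash README (a sketch)
SPLASH_URL = 'http://127.0.0.1:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

# Dupe filtering that is aware of Splash arguments
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
```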

Log when running the crawler with Scrapy directly

The parse callback is executed when the spider is started with:

scrapy crawl sitesavespider -a domain="https://www.facebook.com"
...
2017-01-29 14:12:37 [scrapy.core.engine] INFO: Spider opened
2017-01-29 14:12:37 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
TEST after yield
2017-01-29 14:12:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.facebook.com via http://127.0.0.1:8050/render.html> (referer: None)
TEST in parse
2017-01-29 14:12:55 [scrapy.core.engine] INFO: Closing spider (finished)
...

Log from scrapyd

When the same spider is started via scrapyd, it returns directly after the SplashRequest and parse is never reached:

>>>scrapyd.schedule("feedbot","sitesavespider",domain="https://www.facebook.com")
u'f2f4e090e62d11e69da1342387f8a0c9'

cat f2f4e090e62d11e69da1342387f8a0c9.log
... 
2017-01-29 14:19:34 [scrapy.core.engine] INFO: Spider opened
2017-01-29 14:19:34 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-29 14:19:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.facebook.com via http://127.0.0.1:8050/render.html> (referer: None)
2017-01-29 14:19:58 [scrapy.core.engine] INFO: Closing spider (finished)
...
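The `scrapyd.schedule` call above presumably comes from a client wrapper; the same job can also be scheduled against scrapyd's schedule.json endpoint with the standard library alone. A sketch, assuming scrapyd listens on its default port 6800:

```python
import json
import urllib.parse
import urllib.request

SCRAPYD_URL = "http://localhost:6800"  # scrapyd's default port; adjust as needed

def schedule_payload(project, spider, **spider_args):
    # schedule.json takes "project" and "spider"; any extra fields are
    # forwarded to the spider as -a arguments (here: domain)
    payload = {"project": project, "spider": spider}
    payload.update(spider_args)
    return payload

def schedule(project, spider, **spider_args):
    # POST the form-encoded payload and return the job id scrapyd assigns
    data = urllib.parse.urlencode(
        schedule_payload(project, spider, **spider_args)).encode()
    with urllib.request.urlopen(SCRAPYD_URL + "/schedule.json", data=data) as resp:
        return json.load(resp)["jobid"]
```

The returned job id is the same identifier that names the per-job log file (`<jobid>.log`).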

Does anyone know this problem, or can anyone help me find my mistake?

1 Answer:

Answer 0 (score: 1)

After trying to reproduce the problem on another machine, it no longer occurred, so I cannot verify it. For anyone else trying to debug an issue like this:

  • print calls in your own spider are not written to the log file by scrapyd by default; instead they appear in the terminal in which scrapyd was started:

2017-02-21 16:24:29+0100 [HTTPChannel,0,127.0.0.1] 127.0.0.1 - - [21/Feb/2017:15:24:28 +0000] "GET /listjobs.json?project=feedbot HTTP/1.1" 200 199 "-" "python-requests/2.2.1 CPython/2.7.6 Linux/3.13.0-86-generic"
2017-02-21 16:24:29+0100 [Launcher,17915/stdout] TEST after yield
TEST in parse
2017-02-21 16:24:29+0100 [HTTPChannel,0,127.0.0.1] 127.0.0.1 - - [21/Feb/2017:15:24:28 +0000] "GET /listjobs.json?project=feedbot HTTP/1.1" 200 199 "-" "python-requests/2.2.1 CPython/2.7.6 Linux/3.13.0-86-generic"