Scrapy start_requests not entering the callback function

Date: 2016-05-09 07:50:56

Tags: python callback scrapy python-requests

I can't figure out why the callback (parse) is never called for the start_requests URLs. The crawl terminates without ever entering the parse function.

Here is my cbrspider.py file:

import scrapy
from scrapy import Request
# CareerbuilderItem is imported from the project's items module (import omitted here)


class CbrSpider(scrapy.Spider):
    name = "cbr"
    allowed_domains = ["careerbuilder.com"]
    start_urls = (
        'http://www.careerbuilder.com/browse/category/computer-and-mathematical',
    )

    def start_requests(self):
        for i in range(1,2):
            yield Request("http://ip.42.pl/raw", callback=self.parse_init)
        for i in range(1,2):
            yield Request("http://www.careerbuilder.com/jobs-net-developer?page_number="+str(i)+"&sort=date_desc", callback=self.parse) 
        for i in range(1,3):
            yield Request("http://www.careerbuilder.com/jobs-it-manager?page_number="+str(i)+"&sort=date_desc", callback=self.parse)


    def parse_init(self, response):
        self.ip = response.xpath('//body/p/text()').extract()

    def parse(self, response):
        print "enter parse function"
        for sel in response.xpath('//*[@class="job-list"]'):
            item = CareerbuilderItem()
            item['ip'] = self.ip[0]
            item['name'] = sel.xpath('//div//h2[@class="job-title"]/a/text()').extract()[0]
            item['location'] = sel.xpath('//div[@class="columns small-12 medium-3 end"]//h4[@class="job-text"]/text()').extract()[0]
        yield item
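(As an aside, the concatenated pagination URLs in `start_requests` can be built with a small helper; `job_urls` below is a hypothetical name, not part of the original spider:)

```python
def job_urls(slug, pages):
    # Build the paginated CareerBuilder search URLs used in start_requests.
    template = "http://www.careerbuilder.com/jobs-{slug}?page_number={page}&sort=date_desc"
    return [template.format(slug=slug, page=i) for i in range(1, pages + 1)]

print(job_urls("it-manager", 2)[0])
# -> http://www.careerbuilder.com/jobs-it-manager?page_number=1&sort=date_desc
```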

1 Answer:

Answer 0 (score: 0)

This (almost your code, except that I replaced the Item object you are scraping with a plain dict, since I don't have your item definition) seems to run correctly (see the output below) with Python 2.7.9, Scrapy 1.0.5 and Twisted 16.0.0. Which Python version are you using?
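To check which versions are in play, a quick sketch like this works (it prints "not installed" when a package is missing):

```python
import sys

# Python interpreter version, e.g. "2.7.9" or "3.11.4"
py_version = sys.version.split()[0]
print("Python: " + py_version)

# Report the installed versions of scrapy and twisted, if present.
for pkg in ("scrapy", "twisted"):
    try:
        mod = __import__(pkg)
        print(pkg + ": " + str(getattr(mod, "__version__", "unknown")))
    except ImportError:
        print(pkg + ": not installed")
```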

Script to run it:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# (Alternatively, shell out to the CLI instead:
#  from subprocess import call
#  call(["scrapy", "crawl", "cbr"]) )

process = CrawlerProcess(get_project_settings())
process.crawl('cbr')
process.start()  # the script will block here until the crawling is finished

Code:

from scrapy import Spider, Request


class CbrSpider(Spider):
    name = "cbr"
    allowed_domains = ["careerbuilder.com"]
    start_urls = (
        'http://www.careerbuilder.com/browse/category/computer-and-mathematical',
    )

    def start_requests(self):
        for i in range(1,2):
            yield Request("http://ip.42.pl/raw", callback=self.parse_init)
        for i in range(1,2):
            yield Request("http://www.careerbuilder.com/jobs-net-developer?page_number="+str(i)+"&sort=date_desc", callback=self.parse)
        for i in range(1,3):
            yield Request("http://www.careerbuilder.com/jobs-it-manager?page_number="+str(i)+"&sort=date_desc", callback=self.parse)


    def parse_init(self, response):
        self.ip = response.xpath('//body/p/text()').extract()

    def parse(self, response):
        print "enter parse function"
        for sel in response.xpath('//*[@class="job-list"]'):
            item = {}
            item['ip'] = self.ip[0]
            item['name'] = sel.xpath('//div//h2[@class="job-title"]/a/text()').extract()[0]
            item['location'] = sel.xpath('//div[@class="columns small-12 medium-3 end"]//h4[@class="job-text"]/text()').extract()[0]
        yield item
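Incidentally, `extract()[0]` raises IndexError whenever the XPath matches nothing; Scrapy 1.0+ offers `extract_first()` on selector lists, which a one-line helper can mimic (`first_or_default` is a hypothetical name, not part of the answer):

```python
def first_or_default(values, default=u""):
    # Return the first extracted value, or a default when the XPath matched nothing.
    return values[0] if values else default

print(first_or_default([u"\nHonolulu, HI\n"]).strip())  # -> Honolulu, HI
print(repr(first_or_default([])))                       # -> ''
```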

Partial output:

2016-05-09 13:11:18 [scrapy] INFO: Scrapy 1.0.5 started (bot: crawl_hhgreg)
2016-05-09 13:11:18 [scrapy] INFO: Optional features available: ssl, http11
2016-05-09 13:11:18 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'crawl_hhgreg.spiders', 'SPIDER_MODULES': ['crawl_hhgreg.spiders'], 'BOT_NAME': 'crawl_hhgreg'}
2016-05-09 13:11:18 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-05-09 13:11:18 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-05-09 13:11:18 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-05-09 13:11:18 [scrapy] INFO: Enabled item pipelines: JsonWriterPipeline
2016-05-09 13:11:18 [scrapy] INFO: Spider opened
2016-05-09 13:11:18 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-05-09 13:11:18 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-05-09 13:11:20 [scrapy] DEBUG: Crawled (200) <GET http://ip.42.pl/raw> (referer: None)
2016-05-09 13:11:22 [scrapy] DEBUG: Crawled (200) <GET http://www.careerbuilder.com/jobs-net-developer?page_number=1&sort=date_desc> (referer: None)
enter parse function
2016-05-09 13:11:22 [scrapy] DEBUG: Scraped from <200 http://www.careerbuilder.com/jobs-net-developer?page_number=1&sort=date_desc>
{'ip': u'62.38.254.183', 'name': u'Systems Developer (Treasury Management) - 6111 N River Rd', 'location': u'\nRosemont, IL\n'}
2016-05-09 13:11:23 [scrapy] DEBUG: Crawled (200) <GET http://www.careerbuilder.com/jobs-it-manager?page_number=1&sort=date_desc> (referer: None)
enter parse function
2016-05-09 13:11:23 [scrapy] DEBUG: Scraped from <200 http://www.careerbuilder.com/jobs-it-manager?page_number=1&sort=date_desc>
{'ip': u'62.38.254.183', 'name': u'Medical Technologist', 'location': u'\nHonolulu, HI\n'}