Scrapy CrawlSpider not following links

Asked: 2013-11-16 23:53:26

Tags: python web-crawler scrapy

I based this on the CrawlSpider example in the Scrapy documentation: http://doc.scrapy.org/en/latest/topics/spiders.html

I want to extract links from a page and follow them to parse tables of statistics, but somehow no links seem to be extracted and followed to the pages that contain the data. Here is my script:

from basketbase.items import BasketbaseItem
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request



class Basketspider(CrawlSpider):
    name = "basketsp"
    allowed_domains = ["euroleague.net"]
    start_urls = ["http://www.euroleague.net/main"]
    rules = (
        Rule(SgmlLinkExtractor(allow=("results/by-date?seasoncode=E2000")),follow=True),
        Rule(SgmlLinkExtractor(allow=("showgame?gamecode=165&seasoncode=E2000#!boxscore")), callback='parse_item'),
    )


    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        sel = HtmlXPathSelector(response)
        items=[]
        item = BasketbaseItem()
        item['date'] = sel.select('//div[@class="gs-dates"]/text()').extract() # Game date
        item['time'] = sel.select('//div[@class="gs-dates"]/span[@class="GameScoreTimeContainer"]/text()').extract() # Game time
        item['stage'] = sel.select('//div[@class="gs-dates"]/text()').extract() # Stage of tournament
        item['home'] = sel.select('//div[@class="gs-teams"]/a[@class="localClub"]/text()').extract() #Home team
        item['guest'] = sel.select('//div[@class="gs-teams"]/a[@class="roadClub"]/text()').extract() #Visitor team
        item['referees'] = sel.select('//span[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_lblReferees"]/text()').extract() #Referees
        item['attendance'] = sel.select('//span[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_lblAudience"]/text()').extract()
        # Quarter-by-quarter scores all live in one long ASP.NET table id;
        # keep it in a variable instead of repeating it eight times.
        tbl = '//table[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_PartialsStatsByQuarter_dgPartials"]'
        item['fst'] = sel.select(tbl + '//tr[2]/td[2][@class="AlternatingColumn"]/text()').extract() + sel.select(tbl + '//tr[3]/td[2][@class="AlternatingColumn"]/text()').extract()
        item['snd'] = sel.select(tbl + '//tr[2]/td[3][@class="NormalColumn"]/text()').extract() + sel.select(tbl + '//tr[3]/td[3][@class="NormalColumn"]/text()').extract()
        item['trd'] = sel.select(tbl + '//tr[2]/td[4][@class="AlternatingColumn"]/text()').extract() + sel.select(tbl + '//tr[3]/td[4][@class="AlternatingColumn"]/text()').extract()
        item['tth'] = sel.select(tbl + '//tr[2]/td[5][@class="NormalColumn"]/text()').extract() + sel.select(tbl + '//tr[3]/td[5][@class="NormalColumn"]/text()').extract()
        item['xt1'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['xt2'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['xt3'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['xt4'] = sel.select('//div[@class="gs-dates"]/text()').extract()
        item['game_id'] = sel.select('//span[@id="ctl00_ctl00_ctl00_ctl00_maincontainer_maincenter_contentpane_boxscorepane_ctl00_lblReferees"]/text()').extract() # Game ID construct
        item['arena'] = sel.select('//div[@class="gs-dates"]/text()').extract() #Arena
        item['result'] = sel.select('//span[@class="score"]/text()').extract() #Result
        item['league'] = sel.select('//div[@class="gs-dates"]/text()').extract() #League
        print item['date'],item['time'], item['stage'], item['home'],item['guest'],item['referees'],item['attendance'],item['fst'],item['snd'],item['trd'],item['tth'],item['result']
        items.append(item)
        return items

This is the terminal output I get:

scrapy crawl basketsp
2013-11-17 01:40:15+0200 [scrapy] INFO: Scrapy 0.16.2 started (bot: basketbase)
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole,   CloseSpider, WebService, CoreStats, SpiderState
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Enabled item pipelines: 
2013-11-17 01:40:15+0200 [basketsp] INFO: Spider opened
2013-11-17 01:40:15+0200 [basketsp] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-11-17 01:40:15+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-11-17 01:40:15+0200 [basketsp] DEBUG: Crawled (200) <GET http://www.euroleague.net/main> (referer: None)
2013-11-17 01:40:15+0200 [basketsp] INFO: Closing spider (finished)
2013-11-17 01:40:15+0200 [basketsp] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 228,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 9018,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 11, 16, 23, 40, 15, 496752),
     'log_count/DEBUG': 7,
     'log_count/INFO': 4,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2013, 11, 16, 23, 40, 15, 229125)}
2013-11-17 01:40:15+0200 [basketsp] INFO: Spider closed (finished)

What am I doing wrong here? Any ideas would be a big help. I tried leaving SgmlLinkExtractor() empty so that all links would be followed, but I got the same result; the variant I tried is sketched below. There is no sign that the CrawlSpider is working at all.
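For reference, the catch-all variant of the rules looked roughly like this:

rules = (
    Rule(SgmlLinkExtractor(), follow=True, callback='parse_item'),
)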

I am running Scrapy version 0.16.2 on Python 2.7.2+.

2 Answers:

Answer 0 (score: 2)

Scrapy is misinterpreting the content type of the start URL.

You can verify this with the scrapy shell:

$ scrapy shell 'http://www.euroleague.net/main' 
2013-11-18 16:39:26+0900 [scrapy] INFO: Scrapy 0.21.0 started (bot: scrapybot)
...

AttributeError: 'Response' object has no attribute 'body_as_unicode'

See my previous answer regarding the missing body_as_unicode attribute. I noticed that the server does not set any Content-Type header.
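You can check the header directly in the same shell session; Scrapy's Headers object behaves like a dict, so get() simply returns None for a missing key:

>>> response.headers.get('Content-Type')   # prints nothing when the header is absent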

CrawlSpider ignores non-HTML responses, so the response is never processed and no links are followed.

I would suggest opening an issue on GitHub, as I think Scrapy should be able to handle this case transparently.

As a workaround, you can override CrawlSpider's parse method, build an HtmlResponse from the response object passed in, and hand it to the superclass's parse method.
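A minimal sketch of that workaround might look like this (the utf-8 encoding is an assumption; use whatever encoding the page actually declares):

from scrapy.http import HtmlResponse

class Basketspider(CrawlSpider):
    # ... name, allowed_domains, start_urls and rules as in the question ...

    def parse(self, response):
        # Re-wrap the generic Response as an HtmlResponse so that
        # CrawlSpider's rule and link-extraction machinery will run on it.
        if not isinstance(response, HtmlResponse):
            response = HtmlResponse(url=response.url,
                                    headers=response.headers,
                                    body=response.body,
                                    encoding='utf-8')  # assumed encoding
        return super(Basketspider, self).parse(response)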

Answer 1 (score: -1)

Add "www" to the allowed domains.
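In the spider above, that would mean, for example:

allowed_domains = ["www.euroleague.net", "euroleague.net"]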