Scrapy spider not crawling

Asked: 2015-11-07 14:22:31

Tags: python web-scraping scrapy scrapy-spider

I'm trying to test a Scrapy CrawlSpider, but I can't figure out why it won't crawl. All it should do is crawl Wikipedia's Mathematics page to a depth of one level and return the title of every crawled page. What am I missing? Any help is much appreciated!

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.selector import Selector
from Beurs.items import WikiItem

class WikiSpider(CrawlSpider):
    name = 'WikiSpider'
    allowed_domains = ['wikipedia.org']
    start_urls = ["http://en.wikipedia.org/wiki/Mathematics"]

    Rules = (
        Rule(LinkExtractor(restrict_xpaths=('//div[@class="mw-body"]//a/@href'))),
        Rule(LinkExtractor( allow=("http://en.wikipedia.org/wiki/",)),     callback='parse_item', follow=True),        
        )


def parse_item(self, response):
    sel = Selector(response)  
    rows = sel.xpath('//span[@class="innhold"]/table/tr')
    items = []

        for row in rows[1:]:
            item = WikiItem()
            item['agent'] = row.xpath('./td[1]/a/text()|./td[1]/text()').extract()
            item['org'] = row.xpath('./td[2]/text()').extract()
            item['link'] = row.xpath('./td[1]/a/@href').extract()
            item['produkt'] = row.xpath('./td[3]/text()').extract()
        items.append(item)
        return items

Settings:

BOT_NAME = 'Beurs'

SPIDER_MODULES = ['Beurs.spiders']
NEWSPIDER_MODULE = 'Beurs.spiders'
DOWNLOAD_HANDLERS = {
  's3': None,
}
DEPTH_LIMIT = 1

And the log:

C:\Users\Jan Willem\Anaconda\Beurs>scrapy crawl BeursSpider
2015-11-07 15:14:36 [scrapy] INFO: Scrapy 1.0.3 started (bot: Beurs)
2015-11-07 15:14:36 [scrapy] INFO: Optional features available: ssl, http11,    boto
2015-11-07 15:14:36 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'Beurs.spiders', 'SPIDER_MODULES': ['Beurs.spiders'], 'DEPTH_LIMIT': 1,    'BOT_NAME': 'Beurs'}
2015-11-07 15:14:36 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-11-07 15:14:36 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-11-07 15:14:36 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-11-07 15:14:36 [scrapy] INFO: Enabled item pipelines:
2015-11-07 15:14:36 [scrapy] INFO: Spider opened
2015-11-07 15:14:36 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-11-07 15:14:36 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-11-07 15:14:36 [scrapy] DEBUG: Redirecting (301) to <GET https://en.wikipedia.org/wiki/Mathematics> from <GET http://en.wikipedia.org/wiki/Mathematics>
2015-11-07 15:14:37 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/Mathematics> (referer: None)
2015-11-07 15:14:37 [scrapy] INFO: Closing spider (finished)
2015-11-07 15:14:37 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 530,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 60393,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/301': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2015, 11, 7, 14, 14, 37, 274000),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2015, 11, 7, 14, 14, 36, 852000)}
2015-11-07 15:14:37 [scrapy] INFO: Spider closed (finished)

1 Answer:

Answer 0 (score: 0):

So basically your regular expression isn't quite right and your XPath needs some tweaking. I think the code below does what you're after, so give it a try and let us know if you need any more help:

def parse_item(self, response):
    sel = Selector(response)
    rows = sel.xpath('//span[@class="innhold"]/table/tr')
    items = []

    for row in rows[1:]:  # skip the table's header row
        item = WikiItem()
        item['agent'] = row.xpath('./td[1]/a/text()|./td[1]/text()').extract()
        item['org'] = row.xpath('./td[2]/text()').extract()
        item['link'] = row.xpath('./td[1]/a/@href').extract()
        item['produkt'] = row.xpath('./td[3]/text()').extract()
        items.append(item)
    return items
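
One more thing worth checking: CrawlSpider only picks up link-following rules from a class attribute named rules (all lowercase), but your spider defines Rules, so no rules are registered at all. That is consistent with the log, which shows only the start URL being fetched before the spider closes. Two smaller points: restrict_xpaths should select page regions (elements), not @href attributes, and because Wikipedia 301-redirects to https, an allow pattern containing the literal http:// will never match the redirected URLs. A rough, untested sketch of what the rule definition could look like, reusing the XPath region from your own spider:

rules = (
    # match article links on both http and https, and restrict link
    # extraction to a content region (an element, not an @href attribute)
    Rule(LinkExtractor(allow=(r'en\.wikipedia\.org/wiki/',),
                       restrict_xpaths=('//div[@class="mw-body"]',)),
         callback='parse_item', follow=True),
)

Finally, make sure parse_item is indented so that it is a method of WikiSpider; as posted it sits at module level, where CrawlSpider will never call it.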