I'm trying to scrape this website (http://www.healthspace.ca/Clients/VIHA/VIHA_Website.nsf/), but Scrapy doesn't seem to find any of the links on the page.
Here is the output I get:
2016-11-17 11:53:01 [scrapy] INFO: Scrapy 1.2.1 started (bot: inspection_grabber)
2016-11-17 11:53:01 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'inspection_grabber.spiders', 'SPIDER_MODULES': ['inspection_grabber.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'inspection_grabber'}
2016-11-17 11:53:01 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2016-11-17 11:53:01 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-11-17 11:53:01 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-11-17 11:53:01 [scrapy] INFO: Enabled item pipelines:
[]
2016-11-17 11:53:01 [scrapy] INFO: Spider opened
2016-11-17 11:53:01 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-11-17 11:53:01 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-11-17 11:53:01 [scrapy] DEBUG: Crawled (200) <GET http://www.healthspace.ca/robots.txt> (referer: None)
2016-11-17 11:53:01 [scrapy] DEBUG: Crawled (200) <GET http://www.healthspace.ca/Clients/VIHA/VIHA_Website.nsf/> (referer: None)
http://www.healthspace.ca/Clients/VIHA/VIHA_Website.nsf/
2016-11-17 11:53:01 [scrapy] INFO: Closing spider (finished)
2016-11-17 11:53:01 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 472,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 1360,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 11, 17, 19, 53, 1, 822353),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2016, 11, 17, 19, 53, 1, 522968)}
2016-11-17 11:53:01 [scrapy] INFO: Spider closed (finished)
And here is my code:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class FirstSpider(scrapy.Spider):
    name = "first"
    allowed_domains = ["www.healthspace.ca"]
    start_urls = ['http://www.healthspace.ca/Clients/VIHA/VIHA_Website.nsf/']

    rules = [
        Rule(LinkExtractor(allow=['.*']),
             callback='parse',
             follow=True)
    ]

    def parse(self, response):
        print response.url
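
In case it matters, here is a minimal sketch of what I understand the equivalent CrawlSpider-based spider would look like (the class name FirstCrawlSpider and the callback name parse_page are just placeholders I made up; the Scrapy docs say a CrawlSpider rule callback should not be named parse). I'm not sure whether subclassing CrawlSpider instead of scrapy.Spider is what's needed for the rules to actually be applied:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class FirstCrawlSpider(CrawlSpider):  # hypothetical name, for illustration only
    name = "first_crawl"
    allowed_domains = ["www.healthspace.ca"]
    start_urls = ['http://www.healthspace.ca/Clients/VIHA/VIHA_Website.nsf/']

    rules = [
        # follow every link on each crawled page and pass it to parse_page
        Rule(LinkExtractor(allow=('.*',)), callback='parse_page', follow=True),
    ]

    def parse_page(self, response):
        # placeholder callback; named parse_page so it does not
        # override CrawlSpider's built-in parse()
        print response.url

Is that the right way to wire up the rules, or is something else stopping the links from being extracted?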