I am writing a Scrapy crawler to scrape information from the real-estate site https://www.iproperty.com.sg/sale/?page=1, https://www.iproperty.com.sg/sale/?page=2, and so on. The idea is: for each row on a page, take the information from that row and follow the row's link for further details. Once every row on the page has been processed, move on to the next page and repeat:
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from property.items import PropertyItem


class IpropCrawlerSpider(CrawlSpider):
    name = 'iprop_crawler'
    allowed_domains = ['www.iproperty.com.sg']
    start_urls = ["https://www.iproperty.com.sg/sale/?page=1"]

    rules = (
        Rule(LinkExtractor(allow=r'sale\/\?page=[1-9]'),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # Each listing is headed by an <h3> whose <a> links to the detail page
        prop_list_xpath = '//h3[@class="cgiArp"]'
        for prop in response.xpath(prop_list_xpath):
            item = PropertyItem()
            item['name'] = prop.xpath('./a/text()').extract_first()
            # Follow the listing's own page to fill in the remaining fields
            deep_uri = prop.xpath('./a/@href').extract_first()
            deep_url = 'https://www.iproperty.com.sg' + deep_uri
            request = scrapy.Request(deep_url, callback=self.parse_per_prop)
            request.meta['item'] = item
            yield request

    def parse_per_prop(self, response):
        item = response.meta['item']
        item['price'] = response \
            .xpath('//div[@class="property-price duzTnm"]/text()') \
            .extract_first()
        item['address'] = response \
            .xpath('//span[@class="property-address sale-default"]/text()') \
            .extract_first()
        item['property_type'] = response \
            .xpath('//div[@class="property-attr-propertyType cXGbLS"]'
                   '/div[2]/text()') \
            .extract_first()
        yield item
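(The PropertyItem class imported above isn't shown in the question; a minimal property/items.py matching the fields used here would be something like the following sketch, assuming no extra fields.)

import scrapy

class PropertyItem(scrapy.Item):
    # One field per value filled in by the spider
    name = scrapy.Field()
    price = scrapy.Field()
    address = scrapy.Field()
    property_type = scrapy.Field()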
Running this crawler scrapes no data at all:
2018-11-09 01:53:58 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: property)
2018-11-09 01:53:58 [scrapy.utils.log] INFO: Versions: lxml 3.7.2.0, libxml2 2.9.4, cssselect 1.0.0, parsel 1.5.0, w3lib 1.17.0, Twisted 17.1.0, Python 3.6.1 |Anaconda custom (64-bit)| (default, Mar 22 2017, 19:54:23) - [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)], pyOpenSSL 16.2.0 (OpenSSL 1.0.2p 14 Aug 2018), cryptography 1.7.1, Platform Linux-4.18.16-arch1-1-ARCH-x86_64-with-arch
2018-11-09 01:53:58 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'property', 'DOWNLOAD_DELAY': 1, 'NEWSPIDER_MODULE': 'property.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['property.spiders']}
2018-11-09 01:53:58 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2018-11-09 01:53:58 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-11-09 01:53:58 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-11-09 01:53:58 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-11-09 01:53:58 [scrapy.core.engine] INFO: Spider opened
2018-11-09 01:53:58 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-11-09 01:53:58 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2018-11-09 01:53:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.iproperty.com.sg/robots.txt> (referer: None)
2018-11-09 01:54:01 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.iproperty.com.sg/sale/?page=1> (referer: None)
2018-11-09 01:54:01 [scrapy.core.engine] INFO: Closing spider (finished)
2018-11-09 01:54:01 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 460,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 154841,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 11, 8, 17, 54, 1, 224281),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'memusage/max': 47136768,
'memusage/startup': 47136768,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2018, 11, 8, 17, 53, 58, 676635)}
2018-11-09 01:54:01 [scrapy.core.engine] INFO: Spider closed (finished)
If I change parse_item to parse_start_url, the first page is scraped, but the pagination links are not followed:
2018-11-09 02:11:42 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 6195,
'downloader/request_count': 20,
'downloader/request_method_count/GET': 20,
'downloader/response_bytes': 2433163,
'downloader/response_count': 20,
'downloader/response_status_count/200': 20,
'finish_reason': 'shutdown',
'finish_time': datetime.datetime(2018, 11, 8, 18, 11, 42, 430358),
'item_scraped_count': 18,
'log_count/DEBUG': 39,
'log_count/INFO': 8,
'memusage/max': 47132672,
'memusage/startup': 47132672,
'request_depth_max': 1,
'response_received_count': 20,
'scheduler/dequeued': 19,
'scheduler/dequeued/memory': 19,
'scheduler/enqueued': 21,
'scheduler/enqueued/memory': 21,
'start_time': datetime.datetime(2018, 11, 8, 18, 11, 18, 416991)}
2018-11-09 02:11:42 [scrapy.core.engine] INFO: Spider closed (shutdown)
I was hoping to draw some insight from this question as to why I can't reach the next-page links.
Answer 0 (score: 2)
Judging from the Scrapy documentation, you are passing a reference to the parse_item method as the rule's callback argument. According to the docs, however, that callback operates on the extracted links, which is not what you want here, since your function needs to run against a Scrapy Response. You should therefore use the process_request argument instead. On a related note, I also changed your regular expression, because as you have it now it only matches pages 1 through 9:
rules = (
    Rule(LinkExtractor(allow=r'sale\/\?page=[1-9]\d*'),
         process_request='parse_item', follow=True),
)
As an aside, you probably shouldn't be handing Request objects back to Scrapy to carry your data; use scrapy.Item together with an ItemLoader to store it instead.
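As a rough sketch of that suggestion, the detail-page callback could populate the item through an ItemLoader instead of assigning fields by hand. The XPaths below are the ones from the question; ItemLoader, add_xpath and load_item are standard Scrapy APIs:

from scrapy.loader import ItemLoader

def parse_per_prop(self, response):
    # Continue filling the item created in parse_item
    loader = ItemLoader(item=response.meta['item'], response=response)
    loader.add_xpath('price', '//div[@class="property-price duzTnm"]/text()')
    loader.add_xpath('address', '//span[@class="property-address sale-default"]/text()')
    loader.add_xpath('property_type',
                     '//div[@class="property-attr-propertyType cXGbLS"]/div[2]/text()')
    yield loader.load_item()

Note that ItemLoader's default output processor collects values into lists, so in practice you would also declare a TakeFirst() output processor on the fields to keep single values.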
Answer 1 (score: 0)
It turned out the problem was with the rule itself, so I had to use XPath selectors instead; see the sketch below.
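For anyone hitting the same wall: one common way to tie a rule to the page markup is LinkExtractor's restrict_xpaths argument, which limits link extraction to a region of the page. The sketch below illustrates the approach; the pagination XPath is an assumption, since the question doesn't show the site's actual markup.

rules = (
    Rule(
        LinkExtractor(
            allow=r'sale/\?page=\d+',
            # Hypothetical pagination container; adjust to the real markup
            restrict_xpaths='//ul[contains(@class, "pagination")]',
        ),
        callback='parse_item',
        follow=True,
    ),
)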