Scrapy returns no data

Date: 2016-04-08 12:29:50

Tags: python python-2.7 scrapy scrapy-spider

I am trying to scrape this page:

http://www.homeimprovementpages.com.au/connect/hypowerelectrical/service/261890

I used this code:

import scrapy

# assuming the item class lives in this project's items.py, as in the
# default Scrapy project layout; without this import the spider raises NameError
from homeimprovement.items import HomeimprovementItem


class HipSpider(scrapy.Spider):
    name = "hip"
    allowed_domains = ["homeimprovementpages.com.au"]
    start_urls = [
        "http://www.homeimprovementpages.com.au/connect/protecelectricalservices/service/163729",
    ]

    def parse(self, response):
        item = HomeimprovementItem()
        item['name'] = response.xpath('//h2[@class="media-heading text-strong"]/text()').extract()
        item['contact'] = response.xpath('//div/span[.="Contact Name:"]/following-sibling::div[1]/text()').extract()
        item['phone'] = response.xpath('//div/span[.="Phone:"]/following-sibling::div[1]/text()').extract()
        yield item

The result is:

C:\Python27\homeimprovement>scrapy crawl hip -o h.csv
2016-04-08 17:49:33 [scrapy] INFO: Scrapy 1.0.5 started (bot: homeimprovement)
2016-04-08 17:49:33 [scrapy] INFO: Optional features available: ssl, http11
2016-04-08 17:49:33 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'homeimprovement.spiders', 'FEED_FORMAT': 'csv', 'SPIDER_MODULES': ['homeimprovement.spiders'], 'FEED_URI': 'h.csv', 'BOT_NAME': 'homeimprovement'}
2016-04-08 17:49:34 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2016-04-08 17:49:34 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-04-08 17:49:34 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-04-08 17:49:34 [scrapy] INFO: Enabled item pipelines:
2016-04-08 17:49:34 [scrapy] INFO: Spider opened
2016-04-08 17:49:34 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-04-08 17:49:34 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-04-08 17:49:34 [scrapy] DEBUG: Crawled (403) <GET http://www.homeimprovementpages.com.au/connect/protecelectricalservices/service/163729> (referer: None)
2016-04-08 17:49:34 [scrapy] DEBUG: Ignoring response <403 http://www.homeimprovementpages.com.au/connect/protecelectricalservices/service/163729>: HTTP status code is not handled or not allowed
2016-04-08 17:49:34 [scrapy] INFO: Closing spider (finished)
2016-04-08 17:49:34 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 276,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 2488,
 'downloader/response_count': 1,
 'downloader/response_status_count/403': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 4, 8, 12, 19, 34, 946000),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 4, 8, 12, 19, 34, 537000)}
2016-04-08 17:49:34 [scrapy] INFO: Spider closed (finished)

A csv file was created in the spider folder, but it is empty. I don't understand what went wrong. I hope someone can point me in the right direction.

2 Answers:

Answer 0 (score: 1):


The page http://www.homeimprovementpages.com.au/connect/hypowerelectrical/service/261890 is protected.

All of the selectors return an empty array.


Answer 1 (score: 0):

This happens because of the forbidden error (403) visible in your log. You have to add a custom User-Agent header when requesting these pages.

There is also a library that lets you add fake user-agent headers for you.
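A minimal sketch of the manual fix, assuming the default Scrapy project layout: set a browser-like USER_AGENT in the project's settings.py so the site stops answering 403. The user-agent string below is only an example value; any current browser string works the same way.

```python
# settings.py -- sketch; overrides Scrapy's default "Scrapy/x.y (+http://scrapy.org)"
# user agent, which many sites reject with 403
USER_AGENT = (
    "Mozilla/5.0 (Windows NT 6.1; WOW64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/49.0.2623.110 Safari/537.36"
)
```

With this setting in place, Scrapy's UserAgentMiddleware (already enabled in the log above) attaches the header to every request the spider sends.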