Scrapy outputs [ into my .json file

Date: 2015-03-30 14:09:58

Tags: python json scrapy scrapy-spider

A genuine Scrapy and Python noob here, so please bear with any silly mistakes. I'm trying to write a spider that recursively crawls a news site and returns each article's headline, date, and first paragraph. I managed to scrape one item from a single page, but as soon as I try to extend that, everything goes wrong.

My spider:

    import scrapy
    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.selector import Selector
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    from basic.items import BasicItem

    class BasicSpiderSpider(CrawlSpider):
        name = "basic_spider"
        allowed_domains = ["news24.com/"]
        start_urls = (
        'http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328',
        )

        rules = (
            Rule(SgmlLinkExtractor(allow=("",)), callback="parse_items", follow=True),
        )

        def parse_items(self, response):
            hxs = Selector(response)
            titles = hxs.xpath('//*[@id="aspnetForm"]')
            items = []
            item = BasicItem()
            item['Headline'] = titles.xpath('//*[@id="article_special"]//h1/text()').extract()
            item["Article"] = titles.xpath('//*[@id="article-body"]/p[1]/text()').extract()
            item["Date"] = titles.xpath('//*[@id="spnDate"]/text()').extract()
            items.append(item)
            return items

I'm still getting the same problem, but noticed there is a "[" every time I run the spider. To try to figure out what the problem is, I ran the following command:

c:\Scrapy Spiders\basic> scrapy parse --spider=basic_spider -c parse_items -d 2 -v http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328

which gave the following output:

2015-03-30 15:28:21+0200 [scrapy] INFO: Scrapy 0.24.5 started (bot: basic)
2015-03-30 15:28:21+0200 [scrapy] INFO: Optional features available: ssl, http11
2015-03-30 15:28:21+0200 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'basic.spiders', 'SPIDER_MODULES': ['basic.spiders'], 'DEPTH_LIMIT': 1, 'DOWNLOAD_DELAY': 2, 'BOT_NAME': 'basic'}
2015-03-30 15:28:21+0200 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-03-30 15:28:21+0200 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-03-30 15:28:21+0200 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-03-30 15:28:21+0200 [scrapy] INFO: Enabled item pipelines:
2015-03-30 15:28:21+0200 [basic_spider] INFO: Spider opened
2015-03-30 15:28:21+0200 [basic_spider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-03-30 15:28:21+0200 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-03-30 15:28:21+0200 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-03-30 15:28:22+0200 [basic_spider] DEBUG: Crawled (200) <GET http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328> (referer: None)
2015-03-30 15:28:22+0200 [basic_spider] INFO: Closing spider (finished)
2015-03-30 15:28:22+0200 [basic_spider] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 282,
         'downloader/request_count': 1,
         'downloader/request_method_count/GET': 1,
         'downloader/response_bytes': 145301,
         'downloader/response_count': 1,
         'downloader/response_status_count/200': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2015, 3, 30, 13, 28, 22, 177000),
         'log_count/DEBUG': 3,
         'log_count/INFO': 7,
         'response_received_count': 1,
         'scheduler/dequeued': 1,
         'scheduler/dequeued/memory': 1,
         'scheduler/enqueued': 1,
         'scheduler/enqueued/memory': 1,
         'start_time': datetime.datetime(2015, 3, 30, 13, 28, 21, 878000)}
2015-03-30 15:28:22+0200 [basic_spider] INFO: Spider closed (finished)

>>> DEPTH LEVEL: 1 <<<
# Scraped Items  ------------------------------------------------------------
[{'Article': [u'Johannesburg - Fifty-six children were taken to\nPietermaritzburg hospitals after showing signs of food poisoning while at\nschool, KwaZulu-Natal emergency services said on Friday.'],
  'Date': [u'2015-03-28 07:30'],
  'Headline': [u'56 children hospitalised for food poisoning']}]
# Requests  -----------------------------------------------------------------
[]

So I can see that the item is being scraped, but no usable item data ends up in the json file. This is how I'm running scrapy:

scrapy crawl basic_spider -o test.json
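As a quick sanity check of whatever lands in test.json: the feed exporter writes a JSON array of items, with each field itself a list because .extract() returns a list of matches. A minimal sketch of reading the feed back (the sample string is an assumption, modelled on the scrapy parse output above):

```python
import json

def headlines(feed_text):
    # The JSON feed is an array of items; each field is a list
    # because .extract() returns every matching node.
    return [item['Headline'][0] for item in json.loads(feed_text)]

# Assumed sample of what a successful test.json would contain,
# based on the `scrapy parse` item dump shown above.
sample = '[{"Headline": ["56 children hospitalised for food poisoning"]}]'
print(headlines(sample))  # ['56 children hospitalised for food poisoning']
```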

I keep coming back to the last line (return items), since changing it to yield or print results in no items being scraped in the parse either.
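For reference, the return/yield switch shouldn't matter here: Scrapy simply iterates over whatever the callback produces, so returning a list of items and yielding them one at a time are interchangeable. A minimal sketch outside Scrapy, using plain dicts in place of BasicItem:

```python
def parse_with_return():
    # Collect items in a list and return it, as the spider above does.
    items = []
    for i in range(3):
        items.append({'n': i})
    return items

def parse_with_yield():
    # Yield items one at a time; the caller iterates either way.
    for i in range(3):
        yield {'n': i}

print(list(parse_with_return()) == list(parse_with_yield()))  # True
```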

1 answer:

Answer 0 (score: 2):

That usually means nothing was scraped: no items were extracted.

In your case, fix your allowed_domains setting:

allowed_domains = ["news24.com"]
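The trailing slash breaks the offsite filter because allowed_domains is matched against each request's hostname, and a hostname can never contain a "/". A rough sketch of that check (an approximation of what Scrapy's OffsiteMiddleware does, not its exact code):

```python
import re
from urllib.parse import urlparse

def is_offsite(url, allowed_domains):
    # Build a hostname regex from the allowed domains, roughly the
    # way OffsiteMiddleware does, and test the request's hostname.
    pattern = r'^(.*\.)?(%s)$' % '|'.join(re.escape(d) for d in allowed_domains)
    host = urlparse(url).hostname or ''
    return not re.search(pattern, host)

url = 'http://www.news24.com/SouthAfrica/News/some-article'
print(is_offsite(url, ['news24.com/']))  # True  -> request filtered out
print(is_offsite(url, ['news24.com']))   # False -> request allowed
```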

Aside from that, just a bit of perfectionist clean-up:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor


class BasicSpiderSpider(CrawlSpider):
    name = "basic_spider"
    allowed_domains = ["news24.com"]
    start_urls = [
        'http://www.news24.com/SouthAfrica/News/56-children-hospitalised-for-food-poisoning-20150328',
    ]

    rules = [
        Rule(LinkExtractor(), callback="parse_items", follow=True),
    ]

    def parse_items(self, response):
        for title in response.xpath('//*[@id="aspnetForm"]'):
            item = BasicItem()
            item['Headline'] = title.xpath('//*[@id="article_special"]//h1/text()').extract()
            item["Article"] = title.xpath('//*[@id="article-body"]/p[1]/text()').extract()
            item["Date"] = title.xpath('//*[@id="spnDate"]/text()').extract()
            yield item
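One further caveat, not covered in the answer above: inside the per-node loop, XPath expressions that start with // still search the whole document rather than the matched node, so if several nodes matched, every item would get identical values. Prefixing the expression with "." makes it relative. A small illustration using lxml (the library Scrapy's selectors are built on):

```python
from lxml import etree

doc = etree.fromstring('<root><a><t>one</t></a><a><t>two</t></a></root>')
for node in doc.xpath('//a'):
    # "//" from a node still queries the whole document...
    print(node.xpath('//t/text()'))
    # ...while ".//" stays within the current node.
    print(node.xpath('.//t/text()'))
```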