Scrapy spider for a JSON response

Date: 2014-01-05 10:38:17

Tags: json web-crawler scrapy

I am trying to write a spider that crawls the following JSON response: http://gdata.youtube.com/feeds/api/standardfeeds/UK/most_popular?v=2&alt=json

What would a spider that scrapes all of the video titles look like? None of my spiders work.

from scrapy.spider import BaseSpider
import json
from youtube.items import YoutubeItem

class MySpider(BaseSpider):
    name = "youtubecrawler"
    allowed_domains = ["gdata.youtube.com"]
    start_urls = ['http://www.gdata.youtube.com/feeds/api/standardfeeds/DE/most_popular?v=2&alt=json']

    def parse(self, response):
        items = []
        jsonresponse = json.loads(response)
        for video in jsonresponse["feed"]["entry"]:
            item = YoutubeItem()
            print jsonresponse
            print video["media$group"]["yt$videoid"]["$t"]
            print video["media$group"]["media$description"]["$t"]
            item["title"] = video["title"]["$t"]
            print video["author"][0]["name"]["$t"]
            print video["category"][1]["term"]
            items.append(item)
        return items

I always get the following error:

2014-01-05 16:55:21+0100 [youtubecrawler] ERROR: Spider error processing <GET http://gdata.youtube.com/feeds/api/standardfeeds/DE/most_popular?v=2&alt=json>
        Traceback (most recent call last):
          File "/usr/local/lib/python2.7/dist-packages/twisted/internet/base.py", line 1201, in mainLoop
            self.runUntilCurrent()
          File "/usr/local/lib/python2.7/dist-packages/twisted/internet/base.py", line 824, in runUntilCurrent
            call.func(*call.args, **call.kw)
          File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 382, in callback
            self._startRunCallbacks(result)
          File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 490, in _startRunCallbacks
            self._runCallbacks()
        --- <exception caught here> ---
          File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
            current.result = callback(current.result, *args, **kw)
          File "/home/bxxxx/svn/ba_txxxxx/scrapy/youtube/spiders/test.py", line 15, in parse
            jsonresponse = json.loads(response)
          File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
            return _default_decoder.decode(s)
          File "/usr/lib/python2.7/json/decoder.py", line 365, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
        exceptions.TypeError: expected string or buffer
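For context, this TypeError is exactly what json.loads raises whenever it is handed anything other than a string; it can be reproduced in isolation, with no Scrapy involved:

```python
import json

# json.loads expects text, not a Scrapy Response object; passing any
# non-string input raises the TypeError shown in the traceback.
try:
    json.loads(object())  # stand-in for the Response object
    caught = None
except TypeError as exc:
    caught = type(exc).__name__
print(caught)  # TypeError
```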

1 Answer:

Answer 0 (score: 4)

I found two problems in your code:

  1. The start URL is not accessible; I took the www out of it.
  2. json.loads(response) needs to be changed to json.loads(response.body_as_unicode())

This works for me:

    class MySpider(BaseSpider):
        name = "youtubecrawler"
        allowed_domains = ["gdata.youtube.com"]
        start_urls = ['http://gdata.youtube.com/feeds/api/standardfeeds/DE/most_popular?v=2&alt=json']
    
        def parse(self, response):
            items = []
            jsonresponse = json.loads(response.body_as_unicode())
            for video in jsonresponse["feed"]["entry"]:
                item = YoutubeItem()
                print video["media$group"]["yt$videoid"]["$t"]
                print video["media$group"]["media$description"]["$t"]
                item["title"] = video["title"]["$t"]
                print video["author"][0]["name"]["$t"]
                print video["category"][1]["term"]
                items.append(item)
            return items