Scrapy does not get the correct response

Time: 2017-01-16 14:57:05

Tags: python scrapy

I am trying to scrape song and singer data from http://music.163.com/#/artist?id=16686, but I cannot get the correct response.

I checked in the scrapy shell: when I request "music.163.com/#/artist?id=16686", the response I get back is for "music.163.com". I don't know why.

Here is the log:

C:\Users\lszxw\PycharmProjects\untitled\scrapy\tutorial\tutorial>scrapy shell http://music.163.com/#/artist/album?id=16686
2017-01-16 22:47:03 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: tutorial)
2017-01-16 22:47:03 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'LOGSTATS_INTERVAL': 0, 'BOT_NAME': 'tutorial', 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'SPIDER_MODULES': ['tutorial.spiders'], 'ROBOTSTXT_OBEY': True}
2017-01-16 22:47:03 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole']
2017-01-16 22:47:03 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-01-16 22:47:03 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-01-16 22:47:03 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-01-16 22:47:03 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-01-16 22:47:03 [scrapy.core.engine] INFO: Spider opened
2017-01-16 22:47:03 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://music.163.com/robots.txt> (referer: None)
2017-01-16 22:47:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://music.163.com/#/artist/album?id=16686> (referer: None)
2017-01-16 22:47:04 [traitlets] DEBUG: Using default logger
2017-01-16 22:47:04 [traitlets] DEBUG: Using default logger
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x0000018893DAB9E8>
[s]   item       {}
[s]   request    <GET http://music.163.com/#/artist/album?id=16686>
[s]   response   <200 http://music.163.com/>
[s]   settings   <scrapy.settings.Settings object at 0x0000018893DCEF98>
[s]   spider     <DefaultSpider 'default' at 0x18893fd39b0>
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects
[s]   shelp()           Shell help (print this help)
[s]   view(response)    View response in a browser

Here is my code; it contains the real URLs:

import scrapy

class KokiaSpider(scrapy.Spider):
    name = 'kokia'

    def start_requests(self):
        start_urls = ["http://music.163.com/#/artist/album?id=16686&limit=12&offset=0",
                      "http://music.163.com/#/artist/album?id=16686&limit=12&offset=12",
                      "http://music.163.com/#/artist/album?id=16686&limit=12&offset=24",
                      "http://music.163.com/#/artist/album?id=16686&limit=12&offset=36",
                      "http://music.163.com/#/artist/album?id=16686&limit=12&offset=48",
                      "http://music.163.com/#/artist/album?id=16686&limit=12&offset=60"]

        start_urls = ["http://music.163.com/#/artist/album?id=16686"]
        for url in start_urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # url = 'http://music.163.com/#'
        for item in response.xpath('//*[@id="m-song-module"]/li/p[1]/a/@href'):
            # full_url = url + item.extract()
            full_url = response.urljoin(item.extract())
            self.log('full_url %s' % full_url)
            # yield scrapy.Request(full_url, callback=self.parse_album)

    def parse_album(self, response):
        for item in response.xpath('//table[@class="m-table"]/tbody/tr/td[2]//a/@href'):
            full_url = response.urljoin(item.extract())
            self.log('full_url %s' % full_url)
            yield scrapy.Request(full_url, callback=self.parse_song)

    def parse_song(self, response):
        song_name = response.xpath('//div[@class="hd"]/div/em/text()').extract_first()
        singer_name = response.xpath('//p[@class="s-fc4"][1]/span/a/text()').extract_first()
        album_name = response.xpath('//p[@class="s-fc4"][2]/a/text()').extract_first()
        comments_num = response.xpath('//*[@id="cnt_comment_count"]/text()').extract_first()
        yield {
            "song": song_name,
            "singer": singer_name,
            "album": album_name,
            "comments": comments_num
        }

2 Answers:

Answer 0: (score: 0)

Your start_urls seem to be incorrect. If you check the network tab and the page source, you will notice that the album/song data is actually contained in an <iframe> tag, which leads to almost the same URLs, just without the #:

"http://music.163.com/#/artist/album?id=16686"
# becomes:
"http://music.163.com/artist/album?id=16686"
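You can see why the # matters with Python's standard library: everything after it is a fragment, which the browser's JavaScript uses client-side but which is never sent to the server:

```python
from urllib.parse import urldefrag

# The part after '#' is a URL fragment: it never appears in the
# HTTP request, so the server only ever sees the bare path.
base, fragment = urldefrag("http://music.163.com/#/artist/album?id=16686")
print(base)      # http://music.163.com/
print(fragment)  # /artist/album?id=16686
```

This matches the shell log above: the request shows the full URL, but the response is for `http://music.163.com/`.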

After that, the song xpath in parse_album is also incorrect. I used this instead:

"//ul[@class='f-hide']/li/a/@href[contains(.,'song')]"

After that, everything seems to work.
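For completeness, the question's start_urls can be rebuilt without the fragment (a sketch only; the limit/offset values are copied from the question's spider, and whether the endpoint paginates exactly this way is an assumption):

```python
# Same album-list pages as in the question's spider, but without the
# '#', so each request hits the iframe's real endpoint directly.
base = "http://music.163.com/artist/album?id=16686&limit=12&offset={}"
start_urls = [base.format(offset) for offset in range(0, 72, 12)]
print(start_urls[0])  # http://music.163.com/artist/album?id=16686&limit=12&offset=0
```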

Answer 1: (score: 0)

This is an Ajax-driven URL: http://music.163.com/#/artist?id=16686, so everything after the # is not handled properly. You can try AjaxCrawlMiddleware, but Scrapy does not handle Ajax URLs well.
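AjaxCrawlMiddleware is disabled by default and is turned on via a setting (a settings.py sketch; note that, as far as I know, the middleware only targets the old `#!` AJAX-crawling scheme, so it may not help with a plain `#` URL like this one):

```python
# settings.py -- enable AjaxCrawlMiddleware (off by default).
# It only kicks in for pages that advertise the '#!' AJAX crawling
# scheme via <meta name="fragment" content="!">; plain '#' routes
# like music.163.com's may still need a different approach, such as
# requesting the iframe URL directly.
AJAXCRAWL_ENABLED = True
```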