Scraping inside a further URL

Date: 2015-04-21 17:44:30

Tags: python web-crawler scrapy

So I have a crawler that extracts information about gigs nicely. However, within that information I also scrape a URL that leads to more details about each listed gig, such as the music genre. How do I scrape inside that URL and carry on scraping everything else?

Here is my code. Any help is really appreciated.

import scrapy # Import required libraries.
from scrapy.selector import HtmlXPathSelector # Allows for path detection in a websites code.
from scrapy.spider import BaseSpider # Used to create a simple spider to extract data.
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor # Needed for the extraction of href links in HTML to crawl further pages.
from scrapy.contrib.spiders import CrawlSpider # Needed to make the crawl spider.
from scrapy.contrib.spiders import Rule # Allows specified rules to affect what the link extractor crawls.
from urlparse import urlparse
import soundcloud
import mysql.connector
import requests
import time
from datetime import datetime

from tutorial.items import TutorialItem

genre = ["Dance",
    "Festivals",
    "Rock/pop"
    ]

class AllGigsSpider(CrawlSpider):
    name = "allGigs" # Name of the Spider. In command promt, when in the correct folder, enter "scrapy crawl Allgigs".
    allowed_domains = ["www.allgigs.co.uk"] # allowed_domains takes plain domain strings, NOT full URLs.
    start_urls = [
        #"http://www.allgigs.co.uk/whats_on/London/clubbing-1.html",
        #"http://www.allgigs.co.uk/whats_on/London/festivals-1.html",
        "http://www.allgigs.co.uk/whats_on/London/tours-65.html"
    ] 

    rules = [
        Rule(SgmlLinkExtractor(restrict_xpaths='//div[@class="more"]'), # Search the start URLs for "more" links to crawl further pages.
        callback="parse_item", 
        follow=True),
    ]

    def parse_start_url(self, response):#http://stackoverflow.com/questions/15836062/scrapy-crawlspider-doesnt-crawl-the-first-landing-page
        return self.parse_item(response)

    def parse_item(self, response):
        for info in response.xpath('//div[@class="entry vevent"]'):
            item = TutorialItem() # Create an item as defined in the items module.
            item['table'] = "London"
            item['url'] = info.xpath('.//a[@class="url"]/@href').extract()
            print item['url']
            item['genres'] = info.xpath('.//li[@class="style"]//text() | ./parent::a[@class="url"]/preceding-sibling::li[@class="style"]//text()').extract()
            print item['genres']
            item['artist'] = info.xpath('.//span[@class="summary"]//text()').extract() # Extract artist information.
            item['venue'] = info.xpath('.//span[@class="vcard location"]//text()').extract() # Extract venue information.
            item['borough'] = info.xpath('.//span[@class="adr"]//text()').extract() # Extract borough information.
            item['date'] = info.xpath('.//span[@class="dates"]//text()').extract() # Extract date information.
            a, b, c = item['date'][0].split()
            # Strip the ordinal suffix (st/nd/rd/th) from the day and assume the year 2015 before parsing.
            item['dateForm'] = datetime.strptime("{} {} {} {}".format(a, b.rstrip("ndthstr"), c, "2015"), "%a %d %b %Y").strftime("%Y,%m,%d")
            preview = ''.join(str(s) for s in item['artist'])
            item['genre'] = info.xpath('.//div[@class="header"]//text() | ./parent::div[@class="rows"]/preceding-sibling::div[@class="header"]//text()').extract()
            client = soundcloud.Client(client_id='401c04a7271e93baee8633483510e263', client_secret='b6a4c7ba613b157fe10e20735f5b58cc', callback='http://localhost:9000/#/callback.html')
            tracks = client.get('/tracks', q=preview, limit=1)
            for track in tracks:
                print track.id
                item['trackz'] = track.id
                yield item

a[@class="url"]是我想要进入的。 li[@class="style"]包含我在网址中需要的信息。非常感谢

Here is an update on the situation. The code I tried below produces an assertion error. A bit confused...

    item ['url'] = info.xpath('.//a[@class="url"]/@href').extract()
    item ['url'] = ''.join(str(t) for t in item['url'])
    yield Request (item['url'], callback='continue_item', meta={'item': item})

def countinue_item(self, response):
    item = response.meta.get('item')
    item['genres']=info.xpath('.//li[@class="style"]//text()').extract()
    print item['genres']
    return self.parse_parse_item(response)

I changed item['url'] into a string with the .join function. Then in continue_item I am inside the URL (or at least it should be!) and the results are returned. But as mentioned above, it isn't working properly yet. I don't think it's too far off.

1 Answer:

Answer 0 (score: 2)

You need to continue scraping it in a new method, for example:

from scrapy.http import Request
    ...
    def parse_item(self, response):
        ...
        yield Request(item['url'], callback=self.continue_item, meta={'item': item})

    def continue_item(self, response):
        item = response.meta.get('item')
        ...
        yield item
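
Pulling that together with the question's parse_item, a minimal sketch of how the two methods could look (the genre XPath on the detail page is an assumption based on the question's description; the field names follow the question's code):

from scrapy.http import Request

    def parse_item(self, response):
        for info in response.xpath('//div[@class="entry vevent"]'):
            item = TutorialItem()
            item['table'] = "London"
            # Join the extracted href list into a single URL string.
            item['url'] = ''.join(str(t) for t in info.xpath('.//a[@class="url"]/@href').extract())
            # ... fill the remaining fields exactly as in the question ...
            # The callback must be a method reference, not a string; passing the
            # name as the string 'continue_item' is a likely cause of the
            # AssertionError described above.
            yield Request(item['url'], callback=self.continue_item, meta={'item': item})

    def continue_item(self, response):
        # Pick up the partially-filled item handed over from parse_item.
        item = response.meta.get('item')
        # Assumed markup: the detail page lists the genres in <li class="style">.
        item['genres'] = response.xpath('//li[@class="style"]//text()').extract()
        yield item

Note also the spelling mismatch in the update (def countinue_item vs. callback='continue_item'), and that if the extracted href is relative it would need joining against response.url first (e.g. with urljoin from the urlparse module) before being passed to Request.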