Newbie: how to scrape multiple web pages using only one start_urls?

Asked: 2013-06-03 15:09:26

Tags: python web-scraping scrapy

First, I am trying to scrape the fund codes (e.g. MGB_U, JAS_U) from: http://www.prudential.com.hk/PruServlet?module=fund&purpose=searchHistFund&fundCd=MMFU_U

Then, scrape each fund's price from URLs such as:

"http://www.prudential.com.hk/PruServlet?module=fund&purpose=searchHistFund&fundCd=" + "MGB_U"

"http://www.prudential.com.hk/PruServlet?module=fund&purpose=searchHistFund&fundCd=" + "JAS_U"
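The per-fund URL is just the shared base query string with a different fundCd value appended; a minimal sketch of that construction (fund codes taken from the examples above):

```python
# Build one history-page URL per fund by appending its code to the
# shared base query string from the question.
BASE = ("http://www.prudential.com.hk/PruServlet"
        "?module=fund&purpose=searchHistFund&fundCd=")

def fund_url(fund_code):
    return BASE + fund_code

urls = [fund_url(code) for code in ["MGB_U", "JAS_U"]]
```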

My code raises NotImplementedError, and I still don't know how to fix it:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from fundPrice.items import FundPriceItem

class PruSpider(BaseSpider):
    name = "prufunds"
    allowed_domains = ["prudential.com.hk"]
    start_urls = ["http://www.prudential.com.hk/PruServlet?module=fund&purpose=searchHistFund&fundCd=MMFU_U"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        funds_U = hxs.select('//table//table//table//table//select[@class="fundDropdown"]//option//@value').extract()
        funds_U = [x for x in funds_U if x != (u"#" and u"MMFU_U")]

        items = []

        for fund_U in funds_U:
            url = "http://www.prudential.com.hk/PruServlet?module=fund&purpose=searchHistFund&fundCd=" + fund_U
            item = FundPriceItem()
            item['fund'] = fund_U
            item['data'] =  hxs.select('//table//table//table//table//td[@class="fundPriceCell1" or @class="fundPriceCell2"]//text()').extract()
            items.append(item)
            return items

1 Answer:

Answer 0 (score: 1)

You should issue a scrapy Request for each fund inside the loop:

from scrapy.http import Request
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from fundPrice.items import FundPriceItem


class PruSpider(BaseSpider):
    name = "prufunds"
    allowed_domains = ["prudential.com.hk"]
    start_urls = ["http://www.prudential.com.hk/PruServlet?module=fund&purpose=searchHistFund&fundCd=MMFU_U"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        funds_U = hxs.select('//table//table//table//table//select[@class="fundDropdown"]//option//@value').extract()
        # skip the "#" placeholder option and the fund we started from
        funds_U = [x for x in funds_U if x not in (u"#", u"MMFU_U")]

        for fund_U in funds_U:
            yield Request(
                url="http://www.prudential.com.hk/PruServlet?module=fund&purpose=searchHistFund&fundCd=" + fund_U,
                callback=self.parse_fund,
                meta={'fund': fund_U})

    def parse_fund(self, response):
        hxs = HtmlXPathSelector(response)
        item = FundPriceItem()
        item['fund'] = response.meta['fund']
        item['data'] = hxs.select(
            '//table//table//table//table//td[@class="fundPriceCell1" or @class="fundPriceCell2"]//text()').extract()
        return item

Hope that helps.
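One Python pitfall worth flagging: the question filters the option values with `x != (u"#" and u"MMFU_U")`. Because `and` between two non-empty strings evaluates to the second operand, that expression only excludes u"MMFU_U" and lets the u"#" placeholder through; `x not in (u"#", u"MMFU_U")` is the intended check. A quick demonstration:

```python
# "and" between two truthy values returns the second operand, so the
# original condition is equivalent to x != u"MMFU_U".
values = [u"#", u"MMFU_U", u"MGB_U", u"JAS_U"]

buggy = [x for x in values if x != (u"#" and u"MMFU_U")]   # u"#" slips through
fixed = [x for x in values if x not in (u"#", u"MMFU_U")]  # both excluded
```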