Why are the multiple URLs in start_urls not all crawled in Python Scrapy?

Time: 2016-05-12 20:35:17

Tags: python-2.7 scrapy scrapy-spider

I have several different pages to crawl; I want to put their URLs in an array and crawl all of them. I tried this:

import io
import sys
import urlparse  # Python 2 stdlib; became urllib.parse in Python 3

# Imports as used in Scrapy 0.24/1.0-era code
from scrapy import log
from scrapy.http import Request
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.selector import HtmlXPathSelector


class Myclass(CrawlSpider):
    # Python 2 workaround to make UTF-8 the default string encoding
    reload(sys)
    sys.setdefaultencoding('utf8')

    name = 'myclass'
    allowed_domains = ["amazon.fr"]
    pageNumber = 0
    cmt = 0
    firstPage = True
    rules = [
        # Product links in the classic "mainResults" layout
        Rule(LinkExtractor(restrict_xpaths=('//div[@id="mainResults"]//h3[@class="newaps"]/a',)),
             callback='parse_page1', follow=True),
        # Pagination links at the bottom of the results page
        Rule(LinkExtractor(restrict_xpaths=('//div[@id="bottomBar"]/div[@id="pagn"]/span[@class="pagnLink"]/a',)),
             follow=True),
        # Product detail links in the newer "s-item-container" layout
        Rule(LinkExtractor(restrict_xpaths=(
            '//div[@class="s-item-container"]//a[@class="a-link-normal s-access-detail-page a-text-normal"]',)),
            callback='parse_page', follow=True),
    ]
    arrayCategories = []
    pageCrawled = []
    fileNumbers = 0
    first = 0
    start_urls = ['https://www.amazon.fr/s/ref=sr_nr_p_6_0?fst=as%3Aoff&rh=n%3A197861031%2Cn%3A!197862031%2Cn%3A212130031%2Cn%3A3008171031%2Cp_76%3A211708031%2Cp_6%3AA1X6FK5RDHNB96&bbn=3008171031&ie=UTF8&qid=1463074601&rnid=211045031'
                    ,'https://www.amazon.fr/s/ref=sr_nr_p_6_0?fst=as%3Aoff&rh=n%3A197861031%2Cn%3A!197862031%2Cn%3A212130031%2Cn%3A3008171031%2Cp_76%3A211708031%2Cp_6%3AA1X6FK5RDHNB96&bbn=3008171031&ie=UTF8&qid=1463074601&rnid=211045031',
                    'https://www.amazon.fr/s/ref=sr_nr_n_1/275-0316831-3563928?fst=as%3Aoff&rh=n%3A197861031%2Cn%3A%21197862031%2Cn%3A212130031%2Cn%3A3008171031%2Cp_76%3A211708031%2Cp_6%3AA1X6FK5RDHNB96%2Cn%3A212136031&bbn=3008171031&ie=UTF8&qid=1463075247&rnid=3008171031',
                    ]
    def __init__(self, idcrawl=None, iddrive=None, idrobot=None, proxy=None, *args, **kwargs):
        super(Myclass, self).__init__(*args, **kwargs)

    def start_requests(self):
        # Yield one request per start URL, all handled by self.parse
        for url in self.start_urls:
            yield Request(url, callback=self.parse)

    def parse(self, response):
        # Hand the current page over to parse_produit as well
        yield Request(response.url, callback=self.parse_produit)

        hxs = HtmlXPathSelector(response)
        try:
            # Follow the "next page" pagination link, if there is one
            nextPageLink = hxs.select("//a[@id='pagnNextLink']/@href").extract()[0]
            nextPageLink = urlparse.urljoin(response.url, nextPageLink)
            self.log('\nGoing to next search page: ' + nextPageLink + '\n', log.DEBUG)
            yield Request(nextPageLink, callback=self.parse)
        except IndexError:
            # No next-page link: the whole category has been paginated through
            self.log('Whole category parsed: ', log.DEBUG)

    def parse_produit(self, response):
        # Debug output: which page number is about to be saved
        print self.pageNumber
        # Dump the page body to a local file for later inspection
        body = response.css('body').extract_first()
        f = io.open('./amazon/page%s' % str(self.pageNumber), 'w+', encoding='utf-8')
        f.write(body)
        f.close()
        self.pageNumber = self.pageNumber + 1
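For completeness, I run the spider with the standard Scrapy CLI (assuming a normal Scrapy project layout):

scrapy crawl myclass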

I have two problems: first, the spider does not crawl all three URLs; second, parse_produit is never called.

What is wrong with my code, and why does the output of "print self.pageNumber" never appear in my console?
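For reference, here is a minimal self-contained sketch of the pattern I thought I was following. The spider name and placeholder URLs are illustrative only; dont_filter=True mirrors what Scrapy's built-in start_requests does, so identical start URLs are not dropped by the duplicate filter:

from scrapy.http import Request
from scrapy.contrib.spiders import CrawlSpider


class MinimalSpider(CrawlSpider):
    name = 'minimal'
    allowed_domains = ['amazon.fr']
    # Placeholder URLs, for illustration only
    start_urls = [
        'https://www.amazon.fr/s/page-1',
        'https://www.amazon.fr/s/page-2',
    ]

    def start_requests(self):
        for url in self.start_urls:
            # dont_filter=True keeps the duplicate filter from silently
            # dropping start URLs that happen to be identical
            yield Request(url, callback=self.parse_page, dont_filter=True)

    def parse_page(self, response):
        # Work on the response that was already downloaded instead of
        # re-requesting response.url, which the duplicate filter would
        # discard as an already-seen request
        self.log('Got %s' % response.url)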

0 Answers

There are no answers yet.