Scrapy only crawls one page

Asked: 2016-08-21 16:33:37

Tags: python web-scraping scrapy

I wrote a Scrapy spider and I want it to crawl all the pages, but it only gets to the second page and then stops. Inside the if next_page: block, the URL only ever changes to the second page and then sticks there. I think I am misunderstanding how the HTTP responses work, because the spider only seems to pick up the next-page link that appears on the starting page.

import scrapy

from tutorial.items import TriniCarsItem

class TCS(scrapy.Spider):
    name = "TCS"
    allowed_domains = ["TCS.com"]
    start_urls = [
        "http://www.TCS.com/database/featuredcarsList.php"]

    def parse(self, response):
        for href in response.css("table > tr > td > a::attr('href')"):
            url = response.urljoin(href.extract())
            yield(scrapy.Request(url, callback=self.parse_dir_contents))
        next_page = response.css("body > table > tr > td > font > b > a::attr('href')")
        if next_page:
            url = response.urljoin(next_page[0].extract())
            print("THIS IS THE URL =----------------------------- " + url)
            yield(scrapy.Request(url, self.parse))

    def parse_dir_contents(self, response):
        for sel in response.xpath('//table[@width="543"]/tr/td/table/tr/td[2]/table'):
            item = TriniCarsItem()  # matches the import above; the original referenced an undefined TCSItem
            item['id'] = sel.xpath('tr[1]/td[1]//text()').extract()
            item['make'] = sel.xpath('tr[3]/td[2]//text()').extract()
            item['model'] = sel.xpath('tr[4]/td[2]//text()').extract()
            item['year'] = sel.xpath('tr[5]/td[2]//text()').extract()
            item['colour'] = sel.xpath('tr[6]/td[2]//text()').extract()
            item['engine_size'] = sel.xpath('tr[7]/td[2]//text()').extract()
            item['mileage'] = sel.xpath('tr[8]/td[2]//text()').extract()
            item['transmission'] = sel.xpath('tr[9]/td[2]//text()').extract()
            item['features'] = sel.xpath('tr[11]/td[2]//text()').extract()
            item['additional_info'] = sel.xpath('tr[12]/td[2]//text()').extract()
            item['contact_name'] = sel.xpath('tr[14]/td[2]//text()').extract()
            item['contact_phone'] = sel.xpath('tr[15]/td[2]//text()').extract()
            item['contact_email'] = sel.xpath('tr[16]/td[2]//text()').extract()
            item['asking_price'] = sel.xpath('tr[17]/td[2]//text()').extract()
            item['date_added'] = sel.xpath('tr[19]/td[2]//text()').extract()
            item['page_views'] = sel.xpath('tr[20]/td[2]//text()').extract()
            #print(make, model, year, colour, engine_size, mileage, transmission, features, 
            #additional_info, contact_name, contact_phone, contact_email, asking_price, date_added, 
            #page_views)
            yield(item)

1 Answer:

Answer 0 (score: 1)

On the second page, the first link (the one you are selecting) points back to the previous page. Just yield every link in order and let the duplicate filter drop the repeats:

    if next_page:
        for i in next_page:
            url = response.urljoin(i.extract())
            print("THIS IS THE URL =----------------------------- " + url)
            yield scrapy.Request(url, self.parse)
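
For reference, the deduplication mentioned above is Scrapy's default behaviour, not something you have to enable: the stock dupefilter (scrapy.dupefilters.RFPDupeFilter) fingerprints each request URL and silently drops repeats. A minimal sketch of what that means in practice (the spider name and selector here are illustrative, not from the question):

import scrapy

class DedupeDemo(scrapy.Spider):
    # Hypothetical spider, for illustration only.
    name = "dedupe_demo"
    start_urls = ["http://www.TCS.com/database/featuredcarsList.php"]

    def parse(self, response):
        # Requests built from start_urls are sent with dont_filter=True,
        # which is why the start page is always fetched; everything
        # yielded here goes through the dupefilter.
        for href in response.css("a::attr(href)").extract():
            # The dupefilter drops any request whose URL has already
            # been scheduled, so yielding the "previous page" link from
            # every page is harmless -- it is silently discarded.
            yield scrapy.Request(response.urljoin(href), callback=self.parse)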

P.S. In your case, also consider an approach that is both simpler and far more parallel:

start_urls = [
    "http://www.trinicarsforsale.com/database/featuredcarsList.php?page=%d" % i for i in xrange(1, 460)]

def parse(self, response):
    return self.parse_dir_contents(response)
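
Putting that together, here is a minimal sketch of the rewritten spider (assumptions: the item class is the TriniCarsItem imported in the question, the real domain is the trinicarsforsale.com seen in the snippet above, and the pages run from 1 to 459; range replaces xrange on Python 3):

import scrapy

from tutorial.items import TriniCarsItem

class TCS(scrapy.Spider):
    name = "TCS"
    allowed_domains = ["trinicarsforsale.com"]
    # One request per listing page, generated up front, so Scrapy can
    # fetch them in parallel instead of walking "next" links one by one.
    start_urls = [
        "http://www.trinicarsforsale.com/database/featuredcarsList.php?page=%d" % i
        for i in range(1, 460)]  # xrange on Python 2, as in the answer

    def parse(self, response):
        # Every page is parsed the same way, so just delegate.
        return self.parse_dir_contents(response)

    def parse_dir_contents(self, response):
        for sel in response.xpath('//table[@width="543"]/tr/td/table/tr/td[2]/table'):
            item = TriniCarsItem()
            item['make'] = sel.xpath('tr[3]/td[2]//text()').extract()
            item['model'] = sel.xpath('tr[4]/td[2]//text()').extract()
            # ... remaining fields exactly as in the question ...
            yield item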