How to crawl newly found links with Scrapy

Date: 2018-10-12 23:17:59

Tags: python web-scraping scrapy

I recently started using Scrapy, so I'm not very proficient with it yet; this really is a beginner question.

I'm scraping a convention's exhibitor list. I've already scraped the company names and booth numbers, but I also want the links to each company's page, which open in a new window. I've found and stored those links from the anchor tags, but I don't know how to crawl them. Any help or guidance would be lovely.

import scrapy

class ConventionSpider(scrapy.Spider):
    name = 'convention'
    allowed_domains = ['events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']
    start_urls = ['https://events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']

    def parse(self, response):
        name = response.xpath('//*[@class="companyName"]')
        number = response.xpath('//*[@class="boothLabel"]')
        link = response.xpath('//*[@class="companyName"]')
        for row, row1, row2 in zip(name, number, link):
            company = row.xpath('.//*[@class="exhibitorName"]/text()').extract_first()
            booth_num = row1.xpath('.//*[@class="boothLabel aa-mapIt"]/text()').extract_first()
            url = row2.xpath('.//a/@href').extract_first()

            yield {'Company': company,'Booth Number': booth_num}

3 Answers:

Answer 0 (score: 4):

See https://github.com/NilanshBansal/Craigslist_Scrapy/blob/master/craigslist/spiders/jobs.py

import scrapy
from scrapy import Request

class ConventionSpider(scrapy.Spider):
    name = 'convention'
    # allowed_domains = ['events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']
    start_urls = ['https://events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']

    def parse(self, response):
        name = response.xpath('//*[@class="companyName"]')
        number = response.xpath('//*[@class="boothLabel"]')
        link = response.xpath('//*[@class="companyName"]')
        for row, row1, row2 in zip(name, number, link):
            company = row.xpath('.//*[@class="exhibitorName"]/text()').extract_first()
            booth_num = row1.xpath('.//*[@class="boothLabel aa-mapIt"]/text()').extract_first()
            url = row2.xpath('.//a/@href').extract_first()
            # Resolve the (possibly relative) href against the current page
            url = response.urljoin(url)

            # Pass the scraped fields along to the next callback via meta
            yield Request(url, callback=self.parse_page, meta={'Url': url, 'Company': company, 'Booth Number': booth_num})

    def parse_page(self, response):
        company = response.meta.get('Company')
        booth_num = response.meta.get('Booth Number')
        website = response.xpath('//a[@class="aa-BoothContactUrl"]/text()').extract_first()

        yield {'Company': company, 'Booth Number': booth_num, 'Website': website}

Edit: Comment out the allowed_domains line so that the crawler can work on other domains as well.

In reply to your code at https://stackoverflow.com/a/52792350
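
As a side note, allowed_domains expects bare domain names; an entry that includes a scheme, path, or query string (as in the commented-out line above) will not match the request host and can cause every request to be filtered as off-site. A minimal sketch of the conventional form:

allowed_domains = ['events.jspargo.com']  # registered domain only, no path or query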

Answer 1 (score: 1):

A simpler way is to subclass scrapy.spiders.CrawlSpider and specify the rules attribute:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ConventionSpider(CrawlSpider):
    name = 'convention'
    allowed_domains = ['events.jspargo.com']  # domain only; a URL with a path here would filter everything as off-site
    start_urls = ['https://events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']

    rules = (
        Rule(LinkExtractor(allow=(),     # an empty allow matches every link
                           deny=()),     # deny nothing (note: deny=('',) would deny ALL links)
             callback='parse_item',      # function that gets called for each extracted link
             follow=True),
    )

    def parse_item(self, response):
        name = response.xpath('//*[@class="companyName"]')
        number = response.xpath('//*[@class="boothLabel"]')
        link = response.xpath('//*[@class="companyName"]')
        for row, row1, row2 in zip(name, number, link):
            company = row.xpath('.//*[@class="exhibitorName"]/text()').extract_first()
            booth_num = row1.xpath('.//*[@class="boothLabel aa-mapIt"]/text()').extract_first()
            # No need to extract and follow links manually; CrawlSpider does that via the rules

            yield {'Company': company, 'Booth Number': booth_num}

Make sure not to use parse as the callback here, because scrapy.spiders.CrawlSpider uses the parse method internally to implement its logic.
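
To illustrate the point, a deliberately broken sketch (hypothetical spider, do not copy): naming the callback parse shadows CrawlSpider.parse, the method that dispatches the rules, so link following silently stops working:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class BrokenSpider(CrawlSpider):
    name = 'broken'
    rules = (Rule(LinkExtractor(), callback='parse', follow=True),)

    def parse(self, response):
        # Overriding parse replaces the rule-dispatching logic; no rules ever fire
        pass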

Answer 2 (score: 0):

There is an indentation problem with the class method parse_page in your code, and you mistakenly named it "parse" instead of "parse_page". That may be why your code doesn't work. The modified code below works well for me:

import scrapy
from scrapy import Request

class ConventionSpider(scrapy.Spider):
    name = 'Convention'
    allowed_domains = ['events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']
    start_urls = ['https://events.jspargo.com/ASCB18/Public/Exhibitors.aspx?sortMenu=102003']

    def parse(self, response):
        name = response.xpath('//*[@class="companyName"]')
        number = response.xpath('//*[@class="boothLabel"]')
        link = response.xpath('//*[@class="companyName"]')
        for row, row1, row2 in zip(name, number, link):
            company = row.xpath('.//*[@class="exhibitorName"]/text()').extract_first()
            booth_num = row1.xpath('.//*[@class="boothLabel aa-mapIt"]/text()').extract_first()

            next_page_url = row2.xpath('.//a/@href').extract_first()
            next_page_url = response.urljoin(next_page_url)
            yield Request(next_page_url, callback=self.parse_page, meta={'Company': company, 'Booth Number': booth_num}, dont_filter=True)

    def parse_page(self, response):
        company = response.meta.get('Company')
        booth_num = response.meta.get('Booth Number')
        website = response.xpath('//a[@class="aa-BoothContactUrl"]/text()').extract_first()
        yield {'Company': company, 'Booth Number': booth_num, 'Website': website}
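
As an aside, beyond what these answers show: on Scrapy 1.7 and later, cb_kwargs is the documented way to pass data between callbacks, and it avoids the meta key mismatches seen in answer 0 because the values arrive as named arguments. A minimal sketch of the same hand-off, assuming the spider above:

    def parse(self, response):
        # ... extract company, booth_num and next_page_url as above ...
        yield Request(next_page_url, callback=self.parse_page,
                      cb_kwargs={'company': company, 'booth_num': booth_num})

    def parse_page(self, response, company, booth_num):
        # cb_kwargs entries arrive as keyword arguments on the callback
        website = response.xpath('//a[@class="aa-BoothContactUrl"]/text()').extract_first()
        yield {'Company': company, 'Booth Number': booth_num, 'Website': website}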