Need to extract subpage content with Scrapy

Time: 2019-06-10 08:05:48

Tags: python web-scraping scrapy web-crawler

I'm still fairly new to scraping, but I've already put together a few simple scrapers.

I'm trying to go one level deeper by collecting all the links from one page and then crawling the content of each subpage. I've read through a number of examples and Q&As, but I can't get this code to work for me.

import scrapy

from ..items import remoteworkhub_jobs

class remoteworkhub(scrapy.Spider):
    name = 'remoteworkhub'
    allowed_domains = ['www.remoteworkhub.com']
    #start_urls = ['https://jobs.remoteworkhub.com/']
    start_urls = ['https://jobs.remoteworkhub.com']

    # Scrape the individual job urls and pass them to the spider
    def parse(self, response):
        links = response.xpath('//a[@class="jobList-title"]/@href').extract()
        base_url = 'https://jobs.remoteworkhub.com'
        for job_link in links:
            url = base_url + job_link
            yield scrapy.Request(url, callback=self.parsejobpage)


    def parsejobpage(self, response):
        # Extract the content using XPath selectors
        titles = response.xpath('//h1[@class="u-mv--remove u-textH2"]/text()').extract()
        companies = response.xpath('/html/body/div[4]/div/div/div[1]/div[1]/div[1]/div[2]/div[2]/div/div[1]/strong/a/text()').extract()
        categories = response.xpath('/html/body/div[4]/div/div/div[1]/div[1]/div[1]/div[3]/ul/li/a/text()').extract()
        worktype = response.xpath('/html/body/div[4]/div/div/div[1]/div[1]/div[1]/div[5]/div[2]/span/text()').extract()
        job_description = response.xpath('//div[@class="job-body"]//text()').extract()

        item = remoteworkhub_jobs()
        item['title'] = titles
        #item['company'] = companies
        #item['category'] = categories
        #item['worktype'] = worktype
        #item['job_description'] = job_description

        # Hand the scraped item back to Scrapy
        yield item
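
For reference, the import at the top assumes an Item class defined in the project's items.py, which the question doesn't show. A minimal sketch of what remoteworkhub_jobs might look like (the field names are inferred from the spider above, so treat them as assumptions):

import scrapy

class remoteworkhub_jobs(scrapy.Item):
    # Fields inferred from the spider code; adjust to match your project
    title = scrapy.Field()
    company = scrapy.Field()
    category = scrapy.Field()
    worktype = scrapy.Field()
    job_description = scrapy.Field()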

1 Answer:

Answer 0 (score: 1)

Check out the following implementation, which should let you parse the job titles and their associated company names from that site. The way you defined your xpaths is error-prone; absolute paths like /html/body/div[4]/... break as soon as the page layout shifts even slightly. I've modified them so that they work correctly. Give it a try:

import scrapy

class remoteworkhub(scrapy.Spider):
    name = 'remoteworkhub'
    start_urls = ['https://jobs.remoteworkhub.com']

    def parse(self, response):
        # Collect every job link from the listing page and follow it
        for job_link in response.xpath("//*[contains(@class,'job-listing')]//*[@class='jobList-title']/@href").extract():
            url = response.urljoin(job_link)
            yield scrapy.Request(url, callback=self.parsejobpage)

    def parsejobpage(self, response):
        # Pull the title and company name out of the job detail page
        d = {}
        d['title'] = response.xpath("//*[@class='jobDetail-headerIntro']/h1/text()").get()
        d['company'] = response.xpath("//*[@class='jobDetail-headerIntro']//strong//text()").get()
        yield d
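
One design point worth noting: response.urljoin() resolves the relative hrefs against the page URL, which replaces the hand-rolled base_url concatenation from the question. On Scrapy 1.4 or newer, the link-following step can be shortened further with response.follow(), which accepts a relative URL directly. A sketch of a drop-in replacement for the parse method above (not part of the original answer):

    def parse(self, response):
        # response.follow() resolves the relative URL and builds the Request in one step
        for job_link in response.xpath("//*[contains(@class,'job-listing')]//*[@class='jobList-title']/@href").extract():
            yield response.follow(job_link, callback=self.parsejobpage)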

If I use print instead of yield, this is the kind of output I can see in the console:

{'title': 'Sr Full Stack Developer, Node/React - Remote', 'company': 'Clevertech'}
{'title': 'Subject Matter Expert, Customer Experience - Remote', 'company': 'Qualtrics'}
{'title': 'Employee Experience Enterprise Account Executive - Academic and Government - Remote', 'company': 'Qualtrics'}
{'title': 'Senior Solutions Consultant, Brand Experience - Remote', 'company': 'Qualtrics'}
{'title': 'Data Analyst - Remote', 'company': 'Railsware'}
{'title': 'Recruitment Manager - Remote', 'company': 'Railsware'}
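
To try it yourself: assuming the spider is saved inside a Scrapy project, the built-in feed exports will write the yielded dictionaries to a file (the output filename is just an example):

scrapy crawl remoteworkhub -o jobs.json

Without a project, save the spider to a standalone file (remoteworkhub_spider.py is a placeholder name here) and use runspider instead:

scrapy runspider remoteworkhub_spider.py -o jobs.json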