How to scrape all the data from a website?

Date: 2017-05-13 02:16:18

Tags: web-scraping beautifulsoup scrapy

My code only gives me data for 44 links instead of 102. Can someone tell me why it extracts only those, and how I can extract all of them correctly? I'd appreciate your help.

import scrapy

class ProjectItem(scrapy.Item):
    title = scrapy.Field()
    owned = scrapy.Field()
    Revenue2014 = scrapy.Field()
    Revenue2015 = scrapy.Field()
    Website = scrapy.Field()
    Rank = scrapy.Field()
    Employees = scrapy.Field()
    headquarters = scrapy.Field()
    FoundedYear = scrapy.Field()

class ProjectSpider(scrapy.Spider):

    name = "cin100"
    allowed_domains = ['cincinnati.com']
    start_urls = ['http://www.cincinnati.com/story/money/2016/11/26/see-which-companies-16-deloitte-100/94441104/']

    def parse(self, response):

        # get a selector for all 100 companies
        sel_companies = response.xpath('//p[contains(.,"click or tap here.")]/following-sibling::p/a')

        # create a request for every company detail page from its href
        for sel_companie in sel_companies:
            href = sel_companie.xpath('./@href').extract_first()
            url = response.urljoin(href)
            request = scrapy.Request(url, callback=self.parse_company_detail)
            yield request

    def parse_company_detail(self, response):

        # on the detail page, create an item
        item = ProjectItem()
        # get detail information with specific XPath statements
        # e.g. the title is the first paragraph
        item['title'] = response.xpath('//div[@role="main"]/p[1]//text()').extract_first().rsplit('-')[1]
        # e.g. "family owned" has a label we can select on
        item['owned'] = response.xpath('//div[@role="main"]/p[contains(.,"Family owned")]/text()').extract_first()
        item['Revenue2014'] = '$' + response.xpath('//div[@role="main"]/p[contains(.,"2014")]/text()').extract_first().rsplit('$')[1]
        item['Revenue2015'] = '$' + response.xpath('//div[@role="main"]/p[contains(.,"$")]/text()').extract_first().rsplit('$')[1]
        item['Website'] = response.xpath('//div[@role="main"]/p/a[contains(.,"www.")]/@href').extract_first()
        item['Rank'] = response.xpath('//div[@role="main"]/p[contains(.,"rank")]/text()').extract_first()
        item['Employees'] = response.xpath('//div[@role="main"]/p[contains(.,"Employ")]/text()').extract_first()
        item['headquarters'] = response.xpath('//div[@role="main"]/p[10]//text()').extract()
        item['FoundedYear'] = response.xpath('//div[@role="main"]/p[contains(.,"founded")]/text()').extract()
        # finally: yield the item
        yield item

2 Answers:

Answer 0 (score: 1)

There are a few potential problems with those xpaths:

  1. Having your xpath look for text on the page is generally a bad idea. The wording can change from one minute to the next; the layout and HTML structure have a much longer lifetime.

  2. Using 'following-sibling' is also a last-resort xpath feature that is very vulnerable to slight changes on the website.

  3. What I would do instead:


        # iterate over all paragraphs within the article:
        for para in response.xpath("//*[@itemprop='articleBody']/p"):
            url = para.xpath("./a/@href").extract()
            # ... etc.

     This, by the way, gives me the expected 102.

     You may need to filter the urls to remove non-company urls, such as the one labeled "click or tap here" (one way to do that is shown in the sketch after this list).
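For concreteness, here is how the whole parse method could look with that filtering applied. This is a minimal sketch, not the answerer's exact code: the itemprop='articleBody' selector comes from the answer above, while the link-text filter and the overall spider skeleton are illustrative assumptions.

    import scrapy

    class ProjectSpider(scrapy.Spider):
        name = "cin100"
        allowed_domains = ['cincinnati.com']
        start_urls = ['http://www.cincinnati.com/story/money/2016/11/26/see-which-companies-16-deloitte-100/94441104/']

        def parse(self, response):
            # iterate over every paragraph of the article body instead of
            # anchoring on visible text or following-sibling axes
            for para in response.xpath("//*[@itemprop='articleBody']/p"):
                for link in para.xpath('./a'):
                    href = link.xpath('./@href').extract_first()
                    text = ' '.join(link.xpath('.//text()').extract())
                    # assumed filter: skip the in-article navigation link
                    # labeled "click or tap here", which is not a company link
                    if not href or 'click or tap here' in text.lower():
                        continue
                    yield scrapy.Request(response.urljoin(href),
                                         callback=self.parse_company_detail)

        def parse_company_detail(self, response):
            # extract fields as in the question
            ...

The point of anchoring on itemprop='articleBody' is that this structural attribute tends to survive copy changes that would break the text-based xpaths in the question.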

Answer 1 (score: 1)

Looking closely at scrapy's output, you will notice that after a few dozen requests, the remaining ones get redirected, like this:

https://

The page you get served after that redirect says: "We hope you enjoy your free access."

So it looks like they only offer limited access to anonymous users. You will probably need to register for their service to get full access to the data.
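If you want the spider to surface this instead of silently losing items, one option is to stop following redirects on the detail requests and log what comes back. A minimal sketch, assuming Scrapy's standard dont_redirect and handle_httpstatus_list request meta keys; the paywall interpretation itself is this answer's guess about the site, and the spider name is hypothetical:

    import scrapy

    class PaywallAwareSpider(scrapy.Spider):
        # hypothetical spider illustrating redirect/paywall detection
        name = "cin100_paywall_check"

        def make_detail_request(self, url):
            # dont_redirect disables RedirectMiddleware for this request;
            # handle_httpstatus_list lets the 30x response reach our callback
            return scrapy.Request(
                url,
                callback=self.parse_company_detail,
                meta={'dont_redirect': True,
                      'handle_httpstatus_list': [301, 302]},
            )

        def parse_company_detail(self, response):
            if response.status in (301, 302):
                # the site is redirecting us, most likely to its paywall page
                self.logger.warning("Redirected (paywall?): %s -> %s",
                                    response.url,
                                    response.headers.get('Location'))
                return
            # ... normal item extraction as in the question ...

Registering with the site and carrying session cookies would be a separate step beyond this sketch.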