Scrapy: yielding items and chaining consecutive requests

Time: 2019-03-05 06:31:43

Tags: python request scrapy yield

My spider starts on this page, https://finviz.com/screener.ashx, visits every link in the table, and yields some items on the other end. That works fine. I then wanted to add another layer of depth by having my spider visit a link on each of the pages it initially visits:

start_urls > url > url_2

The spider should visit "url", yield some items along the way, then visit "url_2" and yield a few more items, and then move on to the next URL from start_urls.

Here is my spider code:

import scrapy
from scrapy import Request
from dimstatistics.items import DimstatisticsItem

class StatisticsSpider(scrapy.Spider):
    name = 'statistics'

    def __init__(self):
        self.start_urls = ['https://finviz.com/screener.ashx?v=111&f=ind_stocksonly&r=01']

        npagesscreener = 1000

        for i in range(1, npagesscreener + 1):
            self.start_urls.append("https://finviz.com/screener.ashx?v=111&f=ind_stocksonly&r=" + str(i) + "1")


    def parse(self, response):
        for href in response.xpath("//td[contains(@class, 'screener-body-table-nw')]/a/@href"):
            url = "https://www.finviz.com/" + href.extract()
            yield Request(url, callback=self.parse_dir_contents)


    def parse_dir_contents(self, response):
        item = {}

        item['statisticskey'] = response.xpath("//a[contains(@class, 'fullview-ticker')]//text()").extract()[0]

        # Extract the snapshot table's text nodes once instead of re-running
        # the same xpath for every field.
        snapshot = response.xpath("//table[contains(@class, 'snapshot-table2')]/tr/td/descendant::text()").extract()

        item['shares_outstanding'] = snapshot[9]
        item['shares_float'] = snapshot[21]
        item['short_float'] = snapshot[33]
        item['short_ratio'] = snapshot[45]
        item['institutional_ownership'] = snapshot[7]
        item['institutional_transactions'] = snapshot[19]
        item['employees'] = snapshot[97]
        item['recommendation'] = snapshot[133]

        yield item

        url2 = response.xpath("//table[contains(@class, 'fullview-links')]//a/@href").extract()[0]

        yield response.follow(url2, callback=self.parse_dir_stats)


    def parse_dir_stats(self, response):
        item = {}

        item['effective_tax_rate_ttm_company'] = response.xpath("//tr[td[normalize-space()='Effective Tax Rate (TTM)']]/td[2]/text()").extract()
        item['effective_tax_rate_ttm_industry'] = response.xpath("//tr[td[normalize-space()='Effective Tax Rate (TTM)']]/td[3]/text()").extract()
        item['effective_tax_rate_ttm_sector'] = response.xpath("//tr[td[normalize-space()='Effective Tax Rate (TTM)']]/td[4]/text()").extract()
        item['effective_tax_rate_5_yr_avg_company'] = response.xpath("//tr[td[normalize-space()='Effective Tax Rate - 5 Yr. Avg.']]/td[2]/text()").extract()
        item['effective_tax_rate_5_yr_avg_industry'] = response.xpath("//tr[td[normalize-space()='Effective Tax Rate - 5 Yr. Avg.']]/td[3]/text()").extract()
        item['effective_tax_rate_5_yr_avg_sector'] = response.xpath("//tr[td[normalize-space()='Effective Tax Rate - 5 Yr. Avg.']]/td[4]/text()").extract()

        yield item
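As a side note, the hard-coded positional indices in `parse_dir_contents` ([9], [21], [33], ...) break as soon as the site reorders or adds a cell. The odd-numbered positions suggest the snapshot table alternates label and value cells; assuming that layout holds, a label-based lookup is more robust. A minimal sketch with hypothetical sample data (not the spider's actual extraction):

```python
# Hypothetical sample: the table's text nodes alternate label, value.
cells = [
    "Index", "-", "P/E", "12.30",
    "Shs Outstand", "5.97M", "Shs Float", "5.08M",
]

# Pair every label cell (even positions) with the value cell after it.
def pair_cells(cells):
    return dict(zip(cells[::2], cells[1::2]))

stats = pair_cells(cells)
print(stats["Shs Float"])  # 5.08M
```

With this, `item['shares_float'] = stats["Shs Float"]` keeps working even if the table gains or loses rows.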

All of the xpaths and links are correct, yet I can't seem to yield anything at all. I have a feeling there's an obvious mistake here. This is my first attempt at a more elaborate spider.

Any help would be greatly appreciated! Thanks!

***EDIT 2

{'statisticskey': 'AMRB', 'shares_outstanding': '5.97M', 'shares_float': 
'5.08M', 'short_float': '0.04%', 'short_ratio': '0.63', 
'institutional_ownership': '10.50%', 'institutional_transactions': '2.74%', 
'employees': '101', 'recommendation': '2.30'}
2019-03-06 18:45:19 [scrapy.core.scraper] DEBUG: Scraped from <200 
https://www.finviz.com/quote.ashx?t=AMR&ty=c&p=d&b=1>
{'statisticskey': 'AMR', 'shares_outstanding': '154.26M', 'shares_float': 
'89.29M', 'short_float': '13.99%', 'short_ratio': '4.32', 
'institutional_ownership': '0.10%', 'institutional_transactions': '-', 
'employees': '-', 'recommendation': '3.00'}
2019-03-06 18:45:19 [scrapy.core.scraper] DEBUG: Scraped from <200 
https://www.finviz.com/quote.ashx?t=AMD&ty=c&p=d&b=1>
{'statisticskey': 'AMD', 'shares_outstanding': '1.00B', 'shares_float': 
'997.92M', 'short_float': '11.62%', 'short_ratio': '1.27', 
'institutional_ownership': '0.70%', 'institutional_transactions': '-83.83%', 
'employees': '10100', 'recommendation': '2.50'}
2019-03-06 18:45:19 [scrapy.core.scraper] DEBUG: Scraped from <200 
https://www.finviz.com/quote.ashx?t=AMCX&ty=c&p=d&b=1>
{'statisticskey': 'AMCX', 'shares_outstanding': '54.70M', 'shares_float': 
'43.56M', 'short_float': '20.94%', 'short_ratio': '14.54', 
'institutional_ownership': '3.29%', 'institutional_transactions': '0.00%', 
'employees': '1872', 'recommendation': '3.00'}
2019-03-06 18:45:19 [scrapy.core.scraper] DEBUG: Scraped from <200 
https://www.finviz.com/screener.ashx?v=111&f=geo_bermuda>
{'effective_tax_rate_ttm_company': [], 'effective_tax_rate_ttm_industry': 
[], 'effective_tax_rate_ttm_sector': [], 
'effective_tax_rate_5_yr_avg_company': [], 
'effective_tax_rate_5_yr_avg_industry': [], 
'effective_tax_rate_5_yr_avg_sector': []}
2019-03-06 18:45:25 [scrapy.core.scraper] DEBUG: Scraped from <200 
https://www.finviz.com/screener.ashx?v=111&f=geo_china>
{'effective_tax_rate_ttm_company': [], 'effective_tax_rate_ttm_industry': 
[], 'effective_tax_rate_ttm_sector': [], 
'effective_tax_rate_5_yr_avg_company': [], 
'effective_tax_rate_5_yr_avg_industry': [], 
'effective_tax_rate_5_yr_avg_sector': []}

***EDIT 3

Managed to get the spider to actually reach url2 and yield items there. The problem is that it rarely does. Most of the time it redirects to the correct link and gets nothing, or it seems not to redirect at all and just carries on. Not really sure why it's so inconsistent.

2019-03-06 20:11:57 [scrapy.core.scraper] DEBUG: Scraped from <200 
https://www.reuters.com/finance/stocks/financial-highlights/BCACU.A>
{'effective_tax_rate_ttm_company': ['--'], 
'effective_tax_rate_ttm_industry': ['4.63'], 
'effective_tax_rate_ttm_sector': ['20.97'], 
'effective_tax_rate_5_yr_avg_company': ['--'], 
'effective_tax_rate_5_yr_avg_industry': ['3.98'], 
'effective_tax_rate_5_yr_avg_sector': ['20.77']}

Another thing: I know I'm successfully yielding some values on url2, yet they don't appear in my CSV output. I realize this may be an export issue. I've updated the code above to its current state.
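One likely explanation for the missing CSV columns (an assumption about the feed-export setup, not something visible in the logs) is that Scrapy's CSV exporter takes its header from the keys of the first item it sees, so the tax-rate items yielded later get no columns of their own. Declaring every column explicitly via the `FEED_EXPORT_FIELDS` setting avoids this:

```python
# settings.py (sketch): list every column so the CSV exporter does not
# lock onto the key set of whichever item happens to be scraped first.
FEED_EXPORT_FIELDS = [
    "statisticskey", "shares_outstanding", "shares_float",
    "short_float", "short_ratio", "institutional_ownership",
    "institutional_transactions", "employees", "recommendation",
    "effective_tax_rate_ttm_company", "effective_tax_rate_ttm_industry",
    "effective_tax_rate_ttm_sector", "effective_tax_rate_5_yr_avg_company",
    "effective_tax_rate_5_yr_avg_industry",
    "effective_tax_rate_5_yr_avg_sector",
]
```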

1 Answer:

Answer 0 (score: 0):

url2 is a relative path, but scrapy.Request expects a full URL.

Try this:

yield Request(
    response.urljoin(url2),
    callback=self.parse_dir_stats)

Or, simpler:

yield response.follow(url2, callback=self.parse_dir_stats)
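Both forms resolve url2 against response.url. The resolution itself is standard `urllib.parse.urljoin` behavior, which can be checked outside Scrapy (the URLs below are illustrative, taken from the logs above):

```python
from urllib.parse import urljoin

# A relative href is resolved against the page it was found on.
base = "https://www.finviz.com/quote.ashx?t=AMD&ty=c&p=d&b=1"
print(urljoin(base, "screener.ashx?v=111"))
# https://www.finviz.com/screener.ashx?v=111

# An already-absolute href passes through unchanged, so response.follow
# is safe to use on both kinds of links.
print(urljoin(base, "https://www.reuters.com/finance/stocks/financial-highlights/AMD"))
```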