How do I wrap the code that creates start_urls in a Scrapy spider?

Asked: 2017-10-20 09:07:12

Tags: python scrapy

This is the basic structure of my Scrapy spider.

import scrapy
import urllib.request


class TestSpider(scrapy.Spider):
    name = "quotes"
    allowed_domains = ["finance.yahoo.com"]

    def __init__(self, *args, **kw):
        self.timeout = 10

    # Fetch the NASDAQ symbol directory at class-definition time.
    url_nasdaq = "ftp://ftp.nasdaqtrader.com/SymbolDirectory/nasdaqlisted.txt"
    s = urllib.request.urlopen(url_nasdaq).read().decode('ascii')
    s1 = s.split('\r\n')[1:-2]  # drop the header row and the trailing rows
    namelist = [item for item in s1 if "NASDAQ TEST STOCK" not in item]
    s2 = [item.split('|')[0] for item in namelist]  # first column is the ticker symbol
    s3 = [symbol for symbol in s2 if "." not in symbol]  # skip symbols containing a dot

    start_urls = ["https://finance.yahoo.com/quote/" + s + "/financials?p=" + s for s in s3]

    def parse(self, response):
        content = response.body
        target = response.url
        # doing something; code omitted

Save this as test.py and run it with scrapy runspider test.py.

Now I want to wrap all of the code that creates start_urls inside the spider. Here is my attempt:

class TestSpider(scrapy.Spider):
    def __init__(self, *args, **kw):
        self.timeout = 10
        url_nasdaq = "ftp://ftp.nasdaqtrader.com/SymbolDirectory/nasdaqlisted.txt"
        s = urllib.request.urlopen(url_nasdaq).read().decode('ascii')
        s1 = s.split('\r\n')[1:-2]
        namelist = [item for item in s1 if "NASDAQ TEST STOCK" not in item]
        s2 = [item.split('|')[0] for item in namelist]
        s3 = [symbol for symbol in s2 if "." not in symbol]
        self.start_urls = ["https://finance.yahoo.com/quote/" + s + "/financials?p=" + s for s in s3]

But it does not work properly.

1 Answer:

Answer 0 (score: 1):

Spiders have a start_requests method; it is used to create the initial set of requests. Building on your example, it would look like this:

class TestSpider(scrapy.Spider):
    def __init__(self, *args, **kw):
        self.timeout = 10

    def start_requests(self):
        # Build the symbol list here and yield one request per symbol.
        url_nasdaq = "ftp://ftp.nasdaqtrader.com/SymbolDirectory/nasdaqlisted.txt"
        s = urllib.request.urlopen(url_nasdaq).read().decode('ascii')
        s1 = s.split('\r\n')[1:-2]
        namelist = [item for item in s1 if "NASDAQ TEST STOCK" not in item]
        s2 = [item.split('|')[0] for item in namelist]
        s3 = [symbol for symbol in s2 if "." not in symbol]
        for s in s3:
            yield scrapy.Request("https://finance.yahoo.com/quote/" + s + "/financials?p=" + s,
                                 callback=self.parse)
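
A side note, not part of the original answer: the self.timeout attribute set in __init__ is never used in either version. If it is meant to bound the FTP download, urllib.request.urlopen accepts a timeout argument in seconds, so a minimal sketch of that assumption would change only the download line inside start_requests:

        # A sketch, assuming self.timeout is meant to limit the FTP download time.
        s = urllib.request.urlopen(url_nasdaq, timeout=self.timeout).read().decode('ascii')

Doing the download inside start_requests also means the network call happens when the crawl starts rather than when the module is imported, which is one more reason to prefer it over building start_urls at class level.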