Scrapy (Python): passing start_urls from the spider to the pipeline

Posted: 2017-09-29 11:34:16

Tags: python scrapy

I want to pass start_urls from my spider to my MySQLPipeline.

How can I do that?

Here is part of my spider.py:
def __init__(self, *args, **kwargs):
    # 'urls' arrives as a comma-separated string of start URLs
    urls = kwargs.pop('urls', '')
    if urls:
        self.start_urls = urls.split(',')
    self.logger.info(self.start_urls)
    # derive the allowed domain from the start URL
    url = "".join(urls)
    self.allowed_domains = [url.split('/')[-1]]
    super(SeekerSpider, self).__init__(*args, **kwargs)
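
For context, a spider argument like urls is normally supplied with Scrapy's -a option, which is forwarded to __init__ as a keyword argument (the spider name seeker below is an assumption, taken from the class name SeekerSpider):

    scrapy crawl seeker -a urls="http://example.com/page1,http://example.com/page2"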

And here is my pipeline.py:

class MySQLPipeline(object):
    def __init__(self):

        ...

        # get the url from the spider
        start_url = SeekerSpider.start_urls  # not working

        url = "".join(start_url).split('/')[-1]
        self.tablename = url.split('.')[0]
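
The lookup fails because start_urls is assigned in __init__, i.e. on the spider instance; reading it off the class never sees that assignment. A minimal sketch of the distinction (the Demo class is made up for illustration):

class Demo(object):
    start_urls = []  # class attribute: the only thing Demo.start_urls can see

    def __init__(self, urls):
        # instance attribute: exists only on objects created from the class
        self.start_urls = urls.split(',')

print(Demo.start_urls)                    # [] -- __init__ never ran here
print(Demo('http://a.com').start_urls)    # ['http://a.com']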

Update

Here is another way I tried, but if I have 100 requests... it creates the table 100 times...

pipeline.py

class MySQLPipeline(object):
    def __init__(self):
        ...

    def process_item(self, item, spider):
        tbl_name = item['tbl_name']
        # NOTE: process_item runs once per scraped item, so the statements
        # below are executed for every request -- hence the repetition
        general_table = """ CREATE TABLE IF NOT EXISTS CrawledTables
                            (id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
                            Name VARCHAR(100) NOT NULL,
                            Date VARCHAR(100) NOT NULL,
                            PRIMARY KEY (id), UNIQUE KEY (Name))
                            ENGINE=InnoDB DEFAULT CHARSET=utf8 """

        insert_table = """ INSERT INTO CrawledTables (Name, Date) VALUES (%s, %s)"""

        self.cursor.execute(general_table)
        crawled_date = datetime.datetime.now().strftime("%y/%m/%d-%H:%M")
        self.cursor.execute(insert_table, (tbl_name,
                                           str(crawled_date)))

        ...

spider.py

def __init__(self, *args, **kwargs):
    urls = kwargs.pop('urls', '')  # '' instead of [] so the splits below always work
    if urls:
        self.start_urls = urls.split(',')
    self.logger.info(self.start_urls)
    url = "".join(urls)
    self.allowed_domains = [url.split('/')[-1]]
    super(SeekerSpider, self).__init__(*args, **kwargs)

    self.date = datetime.datetime.now().strftime("%y_%m_%d_%H_%M")
    self.dmn = "".join(self.allowed_domains).replace(".", "_")

    # build the MySQL table name from the domain plus a timestamp,
    # e.g. "http://example.com" -> "Example_17_09_29_11_34"
    tablename = urls.split('/')[-1]
    table_name = tablename.split('.')[0]
    newname = table_name[:1].upper() + table_name[1:]
    self.tbl_name = newname + "_" + self.date

def parse_page(self, response):

    item = CrawlerItem()
    # attach the table name to every item so the pipeline can read it
    item['tbl_name'] = self.tbl_name

    ...

With this table I am trying to record each crawled table only once, together with the date it was crawled... basically I take start_urls, derive allowed_domains from it, and then derive tbl_name (the MySQL table name) from that.

1 Answer:

Answer 0 (score: 3)

I found that I need to define another method in the pipeline:
def open_spider(self, spider):

This method receives the spider instance when the crawl starts, so every attribute you set on the spider is available in the pipeline through the spider argument. Unlike the class-level lookup SeekerSpider.start_urls above, which never sees attributes assigned in __init__, open_spider gets the actual instance.
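
Putting it together, a minimal sketch of the pipeline, assuming a pymysql connection (the connection parameters are placeholders; the SQL statements are the ones from the question):

import datetime

import pymysql  # assumption: any MySQL DB-API driver works the same way


class MySQLPipeline(object):

    def open_spider(self, spider):
        # runs exactly once, when the spider opens -- the right place for
        # per-crawl setup such as creating the bookkeeping table
        self.conn = pymysql.connect(host='localhost', user='root',
                                    password='', database='crawler')
        self.cursor = self.conn.cursor()

        # attributes set in the spider's __init__ are reachable here
        self.tbl_name = spider.tbl_name

        self.cursor.execute("""CREATE TABLE IF NOT EXISTS CrawledTables
                               (id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
                               Name VARCHAR(100) NOT NULL,
                               Date VARCHAR(100) NOT NULL,
                               PRIMARY KEY (id), UNIQUE KEY (Name))
                               ENGINE=InnoDB DEFAULT CHARSET=utf8""")
        crawled_date = datetime.datetime.now().strftime("%y/%m/%d-%H:%M")
        self.cursor.execute(
            """INSERT INTO CrawledTables (Name, Date) VALUES (%s, %s)""",
            (self.tbl_name, crawled_date))
        self.conn.commit()

    def process_item(self, item, spider):
        # per-item work only; the one-time bookkeeping stays in open_spider
        ...
        return item

    def close_spider(self, spider):
        self.conn.close()

Because open_spider and close_spider are each called once per crawl by Scrapy, the table creation and the registration insert no longer repeat for every item.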