Inserting into a database

Time: 2018-11-27 03:22:55

Tags: python scrapy database-performance

I have a working Scrapy script that uses a pipeline class to insert scraped items into a database. However, this seems to slow the scraping down considerably. I'm using the process_item method to insert each item into the database as it is scraped. Would it be faster to output the scraped items to a CSV file and then use a stored procedure to insert the data into the database?

def process_item(self, item, spider):

    if 'address_line_1' in item:
        # Listing has a street address: de-duplicate on the address fields.
        sql = """INSERT dbo.PropertyListings (date, url, ad_type, address_line_1, suburb, state, postcode)
        SELECT ?, ?, ?, ?, ?, ?, ?
        WHERE NOT EXISTS
        (   SELECT 1
            FROM dbo.PropertyListings
            WHERE date = ?
            AND address_line_1 = ?
            AND suburb = ?
            AND state = ?
            AND postcode = ?
        )
        """
        self.crsr.execute(sql, item['date'], item['url'], item['ad_type'],
            item['address_line_1'], item['suburb'], item['state'], item['postcode'],
            item['date'], item['address_line_1'], item['suburb'], item['state'],
            item['postcode'])
        self.conn.commit()

    else:
        # No street address: de-duplicate on date and URL instead.
        sql = """INSERT dbo.PropertyListings (date, url, ad_type, address_line_1, suburb, state, postcode)
        SELECT ?, ?, ?, ?, ?, ?, ?
        WHERE NOT EXISTS
        (   SELECT 1
            FROM dbo.PropertyListings
            WHERE date = ?
            AND url = ?
        )
        """
        self.crsr.execute(sql, item['date'], item['url'], item['ad_type'], '',
            item['suburb'], item['state'], item['postcode'],
            item['date'], item['url'])
        self.conn.commit()

    return item
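
For reference, the CSV-then-bulk-load alternative the question asks about could be sketched as below. This is only a rough sketch: the staging table dbo.PropertyListingsStaging, the file path, and the connection string are hypothetical, and the CSV is assumed to come from Scrapy's built-in feed export (e.g. scrapy crawl myspider -o listings.csv).

    import pyodbc

    conn = pyodbc.connect('DSN=mydb')  # hypothetical connection string
    crsr = conn.cursor()

    # BULK INSERT reads the file on the SQL Server machine, so the path must
    # be visible to the server. The raw string keeps '\n' literal so SQL
    # Server (not Python) interprets the row terminator.
    crsr.execute(r"""
        BULK INSERT dbo.PropertyListingsStaging
        FROM 'C:\scrapes\listings.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)
    """)

    # One set-based insert with a NOT EXISTS duplicate check, applied once
    # for the whole batch instead of once per scraped row.
    crsr.execute("""
        INSERT dbo.PropertyListings
            (date, url, ad_type, address_line_1, suburb, state, postcode)
        SELECT s.date, s.url, s.ad_type, s.address_line_1, s.suburb, s.state, s.postcode
        FROM dbo.PropertyListingsStaging AS s
        WHERE NOT EXISTS
        (   SELECT 1
            FROM dbo.PropertyListings AS p
            WHERE p.date = s.date
            AND p.url = s.url
        )
    """)
    conn.commit()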

1 Answer:

Answer 0 (score: 0)

It looks like you're doing one insert per data point. That is indeed very slow! You should consider using bulk insertions after all the data has been collected, or at least insert it in large chunks.

Use something like this

def scrape_me_good():
    data = []

    for something in something_else():
        # Process each scraped item and buffer it in memory
        data.append(process_a_something(something))

    # One round trip to the database for the whole batch
    bulk_insert(data)

instead of this

def scrape_bad():
    for something in something_else():
        # One insert (and one round trip) per item
        single_insert(process_a_something(something))

See this answer for a fairly good look at bulk insert performance in SQL Server.
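
Applied to the pipeline in the question, the batching pattern could look roughly like the sketch below. It buffers rows in process_item and flushes them with pyodbc's executemany (with fast_executemany enabled) every BATCH_SIZE items, and once more when the spider closes. The class name, batch size, and connection string are illustrative, not part of the original code. Note that the per-row NOT EXISTS duplicate check is dropped here; with batched inserts it is simpler to enforce uniqueness with a constraint or a staging-table merge on the database side.

    import pyodbc

    class PropertyListingsPipeline:
        BATCH_SIZE = 500  # illustrative: flush every 500 items

        def open_spider(self, spider):
            self.conn = pyodbc.connect('DSN=mydb')  # hypothetical connection string
            self.crsr = self.conn.cursor()
            self.crsr.fast_executemany = True  # send parameter batches in bulk
            self.rows = []

        def process_item(self, item, spider):
            # Buffer the row instead of hitting the database per item.
            self.rows.append((
                item['date'], item['url'], item['ad_type'],
                item.get('address_line_1', ''), item['suburb'],
                item['state'], item['postcode'],
            ))
            if len(self.rows) >= self.BATCH_SIZE:
                self.flush()
            return item

        def flush(self):
            sql = """INSERT dbo.PropertyListings
                     (date, url, ad_type, address_line_1, suburb, state, postcode)
                     VALUES (?, ?, ?, ?, ?, ?, ?)"""
            self.crsr.executemany(sql, self.rows)
            self.conn.commit()
            self.rows = []

        def close_spider(self, spider):
            # Flush whatever is left in the buffer, then clean up.
            if self.rows:
                self.flush()
            self.conn.close()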