Python Cassandra driver gives the same insert performance as COPY

Asked: 2015-10-15 16:13:15

Tags: python cassandra multiprocessing

I'm trying to use the Python driver's asynchronous execution with Cassandra to see whether I can write records to Cassandra faster than the CQL COPY command.

My Python code looks like this:

from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement
cluster = Cluster(['1.2.1.4'])

session = cluster.connect('test')

with open('dataImport.txt') as f:
    for line in f:
        query = SimpleStatement(
            "INSERT INTO tstTable (id, accts, info) VALUES (%s)" % (line),
            consistency_level=ConsistencyLevel.ONE)
        session.execute_async(query)

But it gives me the same performance as the COPY command... about 2,700 rows/sec... shouldn't asynchronous execution be faster?

Do I need to use multithreading in Python? I've been reading about it, but I don't see how it fits in here...
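For what it's worth, `execute_async` returns immediately, so the loop above is mostly limited by how fast single-threaded Python can build one `SimpleStatement` per row, and nothing ever waits on the returned futures. One common pattern before reaching for multiprocessing is to prepare the statement once and keep a bounded window of in-flight futures. A minimal sketch (the window size, the comma-separated line layout, and the `parse_line` helper are assumptions, not from the original post):

```python
from collections import deque

def parse_line(line):
    # Assumed layout: "id,accts,info" per line (hypothetical; match your file).
    return tuple(line.rstrip('\n').split(',', 2))

def insert_file(session, path, window=100):
    # Prepare once instead of building a new SimpleStatement per row;
    # the server parses the query a single time and values are bound per row.
    insert = session.prepare(
        "INSERT INTO tstTable (id, accts, info) VALUES (?, ?, ?)")
    futures = deque()
    with open(path) as f:
        for line in f:
            if len(futures) >= window:
                # Block on the oldest request so at most `window`
                # inserts are ever in flight.
                futures.popleft().result()
            futures.append(session.execute_async(insert, parse_line(line)))
    # Drain whatever is still outstanding.
    while futures:
        futures.popleft().result()
```

The window keeps memory bounded and surfaces server errors (via `result()`) instead of silently dropping them.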

Edit:

So I found something online that I've tried to modify, but I can't get it to work... this is what I have so far... and I split the file into 3 files in the /Data/toImport/ dir:

import multiprocessing
import time
import os
from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement


cluster = Cluster(['1.2.1.4'])

session = cluster.connect('test')

def mp_worker(inputArg):
    with open(inputArg[0]) as f:
        for line in f:
            query = SimpleStatement(
                "INSERT INTO CustInfo (cust_id, accts, offers) VALUES (%s)" % (line),
                consistency_level=ConsistencyLevel.ONE)
            session.execute_async(query)


def mp_handler(inputData, nThreads = 8):
    p = multiprocessing.Pool(nThreads)
    p.map(mp_worker, inputData, chunksize=1)
    p.close()
    p.join()

if __name__ == '__main__':
    temp_in_data = file_list
    start = time.time()
    in_dir = '/Data/toImport/'
    N_Proc = 8
    file_data = [(in_dir) for i in temp_in_data]

    print '----------------------------------Start Working!!!!-----------------------------'
    print 'Number of Processes using: %d' %N_Proc
    mp_handler(file_data, N_Proc)
    end = time.time()
    time_elapsed = end - start
    print '----------------------------------All Done!!!!-----------------------------'
    print "Time elapsed: {} seconds".format(time_elapsed)

But I get this error:

Traceback (most recent call last):
  File "multiCass.py", line 27, in <module>
    temp_in_data = file_list
NameError: name 'file_list' is not defined

2 Answers:

Answer 0: (score: 3)

The post A Multiprocessing Example for Improved Bulk Data Throughput provides all the details needed to improve bulk-ingest performance. Basically there are 3 mechanisms, plus additional tuning you can do depending on your use case and hardware:

  1. single process (which is what your example uses)
  2. multiprocessing, single query per worker
  3. multiprocessing with concurrent queries

Batch size and concurrency are variables you will have to play with yourself.
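The "concurrent queries" part of mechanisms 2 and 3 is what the driver's `cassandra.concurrent` helpers implement, so each process does not have to manage futures by hand. A hedged sketch against the question's schema (`build_params` and its comma-separated layout are assumptions; the driver import is done inside the function so the pure helper can run without a live cluster):

```python
def build_params(lines):
    # Turn "id,accts,info" text lines into bind tuples (assumed layout).
    return [tuple(l.rstrip('\n').split(',', 2)) for l in lines]

def concurrent_insert(session, lines, concurrency=100):
    # Imported lazily so build_params stays usable without the driver installed.
    from cassandra.concurrent import execute_concurrent_with_args
    insert = session.prepare(
        "INSERT INTO tstTable (id, accts, info) VALUES (?, ?, ?)")
    # Keeps up to `concurrency` requests in flight and returns a list of
    # (success, result_or_exception) pairs once everything completes.
    return execute_concurrent_with_args(session, insert,
                                        build_params(lines),
                                        concurrency=concurrency)
```

The `concurrency` argument here is exactly the kind of knob the answer says you have to experiment with for your hardware.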

Answer 1: (score: 2)

Got it working like this. The key change from your version is that each worker process creates its own Cluster and Session after the fork, instead of sharing the parent's connection:

import multiprocessing
import time
import os
from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement



def mp_worker(inputArg):
    cluster = Cluster(['1.2.1.4'])
    session = cluster.connect('poc')

    with open(inputArg[0]) as f:
        for line in f:
            query = SimpleStatement(
                "INSERT INTO testTable (cust_id, accts, offers) VALUES (%s)" % (line),
                consistency_level=ConsistencyLevel.ONE)
            session.execute_async(query)


def mp_handler(inputData, nThreads = 8):
    p = multiprocessing.Pool(nThreads)
    p.map(mp_worker, inputData, chunksize=1)
    p.close()
    p.join()

if __name__ == '__main__':
    temp_in_data = ['/toImport/part-00000', '/toImport/part-00001', '/toImport/part-00002']
    start = time.time()
    N_Proc = 3
    file_data = [(i,) for i in temp_in_data]

    print '----------------------------------Start Working!!!!-----------------------------'
    print 'Number of Processes using: %d' %N_Proc
    mp_handler(file_data, N_Proc)
    end = time.time()
    time_elapsed = end - start
    print '----------------------------------All Done!!!!-----------------------------'
    print "Time elapsed: {} seconds".format(time_elapsed)