So this is my script:
import glob, os, csv
from sqlalchemy import *

# One list per fixed-width field parsed out of each CDR line
served_imsi = []
served_imei = []
served_msisdn = []
sgsn_address = []
ggsn_address = []
charging_id = []
apn_network = []
location_area_code = []
routing_area = []
cell_identity = []
service_area_code = []
s_charging_characteristics = []
plmn_id = []

path = '/home/cneps/cdr/*.cdr'
for file in glob.glob(path):
    f = open(file)
    for lines in f:
        served_imsi.append(lines[17:17+16])
        served_imei.append(lines[47:47+16])
        served_msisdn.append(lines[65:65+18])
        sgsn_address.append(lines[83:83+32])
        ggsn_address.append(lines[115:115+32])
        charging_id.append(lines[147:147+10])
        apn_network.append(lines[157:157+63])
        location_area_code.append(lines[296:296+4])
        routing_area.append(lines[300:300+2])
        cell_identity.append(lines[302:302+4])
        service_area_code.append(lines[306:306+4])
        s_charging_characteristics.append(lines[325:325+2])
        plmn_id.append(lines[327:327+6])
    f.close()

db = create_engine('sqlite:///TIM_CDR.db', echo=True)
metadata = MetaData(db)
CDR1 = Table('CDR1', metadata, autoload=True)  # reflect the existing table

# One INSERT executed per row -- this is the slow part
i = CDR1.insert()
count = 0
while count < len(served_imei):
    i.execute(Served_IMSI=served_imsi[count], Served_IMEI=served_imei[count], Served_MSISDN=served_msisdn[count], SGSN_Address=sgsn_address[count], GGSN_Address=ggsn_address[count], Charging_ID=charging_id[count], APN_Network=apn_network[count], LAC=location_area_code[count], RAC=routing_area[count], Cell_Identity=cell_identity[count], Service_Area_Code=service_area_code[count], S_Charging_Characteristics=s_charging_characteristics[count], PLMN_ID=plmn_id[count])
    count += 1
This takes a very long time to finish, because the data I'm inserting into the database is around 100k rows; it takes about 30 minutes to complete.
I've read up on this and I know I should probably be using a transaction, but I don't really know how to do it.
Could anyone give me an example, in my code, of how to use a transaction and commit everything at the end?
That would be great, thanks.
Answer 0 (score: 0)
See the test_alchemy_core function in this answer: Why is SQLAlchemy insert with sqlite 25 times slower than using sqlite3 directly? It shows how to execute many inserts in a single batch. Your problem is that you are executing the inserts one at a time, which will always be slow.
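For illustration, here is a minimal sketch of that batched approach adapted to the script above. It assumes the same engine (db), reflected CDR1 table, and parsed field lists from the question, and reuses the column names from the question's insert call:

# Build one dict per row instead of executing one INSERT per row.
rows = []
for count in range(len(served_imei)):
    rows.append({
        'Served_IMSI': served_imsi[count],
        'Served_IMEI': served_imei[count],
        'Served_MSISDN': served_msisdn[count],
        'SGSN_Address': sgsn_address[count],
        'GGSN_Address': ggsn_address[count],
        'Charging_ID': charging_id[count],
        'APN_Network': apn_network[count],
        'LAC': location_area_code[count],
        'RAC': routing_area[count],
        'Cell_Identity': cell_identity[count],
        'Service_Area_Code': service_area_code[count],
        'S_Charging_Characteristics': s_charging_characteristics[count],
        'PLMN_ID': plmn_id[count],
    })

conn = db.connect()
trans = conn.begin()                      # open one transaction for the whole batch
try:
    conn.execute(CDR1.insert(), rows)     # executemany: all rows in one statement
    trans.commit()                        # commit once, after everything is inserted
except:
    trans.rollback()
    raise

Passing a list of dicts to a single execute() lets SQLAlchemy use an executemany-style insert, and wrapping it in one transaction means SQLite commits once instead of once per row, which is likely where most of the 30 minutes is going.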