I have a table in Hive with 351,837 records (110 MB in size), and I am using Python to read this table and write it to SQL Server.
In this process, reading the data from Hive into a pandas dataframe takes a very long time: loading all 351k records takes 90 minutes.
To improve this, I switched to reading 10,000 rows at a time from Hive and writing them to SQL Server. But even reading just 10,000 rows from Hive and assigning them to a dataframe takes 4-5 minutes.
import datetime
import urllib.parse

import pandas
import pyodbc
import sqlalchemy

def execute_hadoop_export():
    """
    This will run the steps required for a Hadoop export.
    Return value is a boolean for success/fail.
    """
    try:
        hql = 'select * from db.table'
        # Open the Hive ODBC connection.
        src_conn = pyodbc.connect("DSN=****", autocommit=True)
        cursor = src_conn.cursor()
        #tgt_conn = pyodbc.connect(target_connection)
        # Use SQLAlchemy so dataframe.to_sql can write to SQL Server.
        sql_conn_url = urllib.parse.quote_plus('DRIVER={ODBC Driver 13 for SQL Server};SERVER=Xyz;DATABASE=Db2;UID=ee;PWD=*****')
        sql_conn_str = "mssql+pyodbc:///?odbc_connect={0}".format(sql_conn_url)
        engine = sqlalchemy.create_engine(sql_conn_str)
        # Read the source table in 10k-row chunks.
        vstart = datetime.datetime.now()
        for df in pandas.read_sql(hql, src_conn, chunksize=10000):
            vfinish = datetime.datetime.now()
            print('Finished 10k rows reading from hive and it took', (vfinish - vstart).seconds / 60.0, 'minutes')
            # Get connection string for target from Ctrl.Connection.
            df.to_sql(name='table', schema='dbo', con=engine, chunksize=10000, if_exists="append", index=False)
            print('Finished 10k rows writing into sql server and it took', (datetime.datetime.now() - vfinish).seconds / 60.0, 'minutes')
            vstart = datetime.datetime.now()
        cursor.close()
    except Exception as e:
        print(str(e))
Output: (timing screenshot not reproduced here)
What is the fastest way to read Hive table data in Python?
Update: Hive table structure
CREATE TABLE `table1`(
`policynumber` varchar(15),
`unitidentifier` int,
`unitvin` varchar(150),
`unitdescription` varchar(100),
`unitmodelyear` varchar(4),
`unitpremium` decimal(18,2),
`garagelocation` varchar(150),
`garagestate` varchar(50),
`bodilyinjuryoccurrence` decimal(18,2),
`bodilyinjuryaggregate` decimal(18,2),
`bodilyinjurypremium` decimal(18,2),
`propertydamagelimits` decimal(18,2),
`propertydamagepremium` decimal(18,2),
`medicallimits` decimal(18,2),
`medicalpremium` decimal(18,2),
`uninsuredmotoristoccurrence` decimal(18,2),
`uninsuredmotoristaggregate` decimal(18,2),
`uninsuredmotoristpremium` decimal(18,2),
`underinsuredmotoristoccurrence` decimal(18,2),
`underinsuredmotoristaggregate` decimal(18,2),
`underinsuredmotoristpremium` decimal(18,2),
`umpdoccurrence` decimal(18,2),
`umpddeductible` decimal(18,2),
`umpdpremium` decimal(18,2),
`comprehensivedeductible` decimal(18,2),
`comprehensivepremium` decimal(18,2),
`collisiondeductible` decimal(18,2),
`collisionpremium` decimal(18,2),
`emergencyroadservicepremium` decimal(18,2),
`autohomecredit` tinyint,
`lossfreecredit` tinyint,
`multipleautopoliciescredit` tinyint,
`hybridcredit` tinyint,
`goodstudentcredit` tinyint,
`multipleautocredit` tinyint,
`fortyfivepluscredit` tinyint,
`passiverestraintcredit` tinyint,
`defensivedrivercredit` tinyint,
`antitheftcredit` tinyint,
`antilockbrakescredit` tinyint,
`perkcredit` tinyint,
`plantype` varchar(100),
`costnew` decimal(18,2),
`isnocontinuousinsurancesurcharge` tinyint)
CLUSTERED BY (
policynumber,
unitidentifier)
INTO 50 BUCKETS
Note: I also tried the Sqoop export option, but my Hive table is already in bucketed format.
Answer 0 (score: 3)
I tried multiprocessing, and with it I was able to reduce the time from 2 hours to 8-10 minutes. Please find the scripts below.
--------- main script ---------
from multiprocessing import Pool
import datetime

import pandas as pd

from query import hivetable
from write_tosql import write_to_sql

# 351k rows in total, so generate the starting row number of each 10k chunk.
lst = list(range(1, 360000, 10000))

print('started reading ', datetime.datetime.now())
# The cluster has 40 cores, so use a pool of 37 workers.
p = Pool(37)
s = p.map(hivetable, lst)
s_df = pd.concat(s)
print('finished reading ', datetime.datetime.now())

print('Started writing to sql server ', datetime.datetime.now())
write_to_sql(s_df)
print('Finished writing to sql server ', datetime.datetime.now())
--------- query.py file ---------
import pandas as pd
import pyodbc

# Hive ODBC connection, shared with the pool workers.
conn = pyodbc.connect("DSN=******", autocommit=True)

def hivetable(row):
    # Fetch one 10k-row window, ordered by policynumber.
    query = ('select * from (select row_number() over (order by policynumber) as rownum, * '
             'from dbg.tble) tbl1 where rownum between {0} and {1}'.format(row, row + 9999))
    result = pd.read_sql(query, conn)
    return result
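The body of write_tosql.py did not survive in this copy of the answer. Below is a minimal sketch only, assuming write_to_sql reuses the same SQLAlchemy engine and DataFrame.to_sql approach as the question's code; the driver, server, database, and credential values are placeholders:

--------- write_tosql.py file (sketch) ---------
import urllib.parse

import sqlalchemy

def write_to_sql(s_df):
    # Placeholder connection details, mirroring the question's engine setup.
    params = urllib.parse.quote_plus('DRIVER={ODBC Driver 13 for SQL Server};SERVER=Xyz;DATABASE=Db2;UID=ee;PWD=*****')
    engine = sqlalchemy.create_engine('mssql+pyodbc:///?odbc_connect={0}'.format(params))
    # Append the concatenated dataframe to the target table in 10k-row batches.
    s_df.to_sql(name='table', schema='dbo', con=engine, chunksize=10000, if_exists='append', index=False)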
Any other solution that helps me reduce the time further would be welcome.
Answer 1 (score: 3)
After using cmd.get_results, what is the best way to read the output from disk with Pandas (e.g. from a Hive command)? For example, consider the following:
from qds_sdk.qubole import Qubole
from qds_sdk.commands import HiveCommand

out_file = 'results.csv'
delimiter = chr(1)
....
Qubole.configure(qubole_key)
hc_params = ['--query', query]
hive_args = HiveCommand.parse(hc_params)
cmd = HiveCommand.run(**hive_args)
if HiveCommand.is_success(cmd.status):
    with open(out_file, 'wt') as writer:
        cmd.get_results(writer, delim=delimiter, inline=False)
If the query runs successfully and I then check the first few bytes of results.csv, I see the following:
$ head -c 300 results.csv
b'flight_uid\twinning_price\tbid_price\timpressions_source_timestamp\n'b'0FY6ZsrnMy\x012000\x012270.0\x011427243278000\n0FamrXG9AW\x01710\x01747.0\x011427243733000\n0FY6ZsrnMy\x012000\x012270.0\x011427245266000\n0FY6ZsrnMy\x012000\x012270.0\x011427245088000\n0FamrXG9AW\x01330\x01747.0\x011427243407000\n0FamrXG9AW\x01710\x01747.0\x011427243981000\n0FamrXG9AW\x01490\x01747.0\x011427245289000\n
When I try to open this file in Pandas:
df = pd.read_csv('results.csv')
it obviously doesn't work (I get an empty DataFrame), because the file is not properly formatted as a CSV. While I could open results.csv and post-process it (removing the b' wrappers and so on) before opening it in Pandas, that would be a very inconvenient way to load the data. Am I using the interface correctly? This is with the latest version of qds_sdk (1.4.2, from three hours ago).
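For reference, here is a minimal sketch of the post-processing described above. It assumes, as the head output suggests, that the file contains the repr of bytes objects, so the b'...' wrappers and the \t, \x01 and \n delimiters are literal escape sequences in the text; the exact cleanup needed may differ by qds_sdk version:

from io import StringIO

import pandas as pd

with open('results.csv') as f:
    raw = f.read()

# Strip the b'...' wrappers and turn the literal escape sequences back into
# real characters; the header row is tab-delimited while the data rows use
# Hive's default \x01 delimiter, so normalize both to \x01.
text = (raw.replace("b'", '')
           .replace("'", '')
           .replace('\\t', '\x01')
           .replace('\\x01', '\x01')
           .replace('\\n', '\n'))

df = pd.read_csv(StringIO(text), sep='\x01')
print(df.head())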