How to use a MySQL query result in a Hive query

Time: 2018-06-29 13:14:49

Tags: linux shell hadoop hive

I have a requirement: I want to use the result of a MySQL query in a Hive query inside a shell script, which I have attempted in the code below.

ctrl_tbl_date=`mysql -N -h ${Mysql_Host_Name} -u ${Mysql_Uname} -p${Mysql_pwd} -e "use ${MySQLDB};select max(Processing_Datetime) from ${Ctrl_Tbl};"`

echo "$ctrl_tbl_date">>/edh_fw/scripts/sqoop_export_by_key/out_test6.txt 

echo "taking the data which is satisfying below conditions"

temp=hive -v -e "set hive.exec.compress.output=false;insert overwrite directory '${temp_incremental_loc}' row format delimited fields terminated by '\t' stored as textfile select * from ${src_table} where createDate>'${ctrl_tbl_date}';"

echo "$temp">>/edh_fw/scripts/sqoop_export_by_key/out_test7.txt 

Every value used above is passed in dynamically from the initialization lines at the top of the script. The problem is that ctrl_tbl_date is not making it into the Hive query, and of course the MySQL result is not being assigned to the variable... please help me.

1 Answer:

Answer 0 (score: 0)

Use

 temp=$(hive -v -e "set hive.exec.compress.output=false;insert overwrite directory '${temp_incremental_loc}' row format delimited fields terminated by '\t' stored as textfile select * from ${src_table} where createDate>'${ctrl_tbl_date}';")

so that the output of the inner command is captured into the temp variable...
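
As a minimal illustration (echo stands in here for the mysql/hive calls):

 greeting=$(echo "hello")   # $( ) runs the command in a subshell and captures its stdout
 echo "$greeting"           # prints: hello

Backticks, as already used on the mysql line in the question, behave the same way; $( ) is simply the more readable and nestable form.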

Now, without $( ), as shown below,

temp=hive -v -e "set hive.exec.compress.output=false;insert overwrite directory '${temp_incremental_loc}' row format delimited fields terminated by '\t' stored as textfile select * from ${src_table} where createDate>'${ctrl_tbl_date}';"

nothing on the right-hand side is executed: bash treats temp=hive as a one-off environment assignment for that line and then tries to run -v as a command, so temp never receives the query output. What you actually want is to assign the result of executing the right-hand side to the variable on the left, which is exactly what $( ) does...
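
For a quick side-by-side with a harmless command (date is used here only as a stand-in for the hive call):

 temp=date -u        # bash takes temp=date as a per-command environment assignment and tries to run "-u" as a command
 temp=$(date -u)     # runs "date -u" and assigns its standard output to temp
 echo "$temp"        # prints the current UTC date and time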