My task is to create an Oozie workflow that loads data into a Hive table every hour.
I am using CDH 5.7 in VirtualBox.
When I run the Hive script containing LOAD DATA INPATH '/sqoop_import_increment' INTO TABLE customer; by hand, it works perfectly and the data is loaded into the Hive table.
But when I run the same script through an Oozie workflow, the job is killed at 66% with the error message Main class [org.apache.oozie.action.hadoop.HiveMain], exit code [10001].
Note: a Hive script that only creates the table runs fine through the Oozie workflow. Please help.
Hive script:
use test;
create external table if not exists customer(customer_id int,name string,address string)row format delimited fields terminated by ',';
load data inpath '/sqoop_import_increment' into table customer;
workflow.xml:
<workflow-app name="hive_script" xmlns="uri:oozie:workflow:0.5">
    <start to="hive-4327"/>
    <kill name="Kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <action name="hive-4327" cred="hcat">
        <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <job-xml>lib/hive-config.xml</job-xml>
            <script>lib/impala-script.hql</script>
        </hive>
        <ok to="End"/>
        <error to="Kill"/>
    </action>
    <end name="End"/>
</workflow-app>
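Since the goal is an hourly load, the workflow above would normally be triggered by an Oozie coordinator rather than submitted by hand. A minimal sketch, where startTime, endTime and workflowAppPath are placeholders I have introduced (not values from the question):

<coordinator-app name="hive_hourly_load" frequency="${coord:hours(1)}"
                 start="${startTime}" end="${endTime}" timezone="UTC"
                 xmlns="uri:oozie:coordinator:0.4">
    <action>
        <workflow>
            <!-- workflowAppPath is a placeholder for the HDFS directory that contains workflow.xml -->
            <app-path>${workflowAppPath}</app-path>
        </workflow>
    </action>
</coordinator-app>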
job.properties:
oozie.use.system.libpath=True
security_enabled=False
dryrun=False
jobTracker=localhost:8032
nameNode=hdfs://quickstart.cloudera:8020
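Note that the properties shown above do not show an application path, which Oozie needs in order to locate workflow.xml on HDFS. A sketch of what that usually looks like, where the HDFS directory is a made-up placeholder rather than something taken from the question:

# hypothetical location of the workflow application on HDFS -- adjust to your deployment
oozie.wf.application.path=${nameNode}/user/cloudera/hive_script
# if the workflow is driven by a coordinator, set oozie.coord.application.path instead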
hive-config.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- Hive Configuration can either be stored in this file or in the hadoop configuration files -->
    <!-- that are implied by Hadoop setup variables. -->
    <!-- Aside from Hadoop setup variables - this file is provided as a convenience so that Hive -->
    <!-- users do not have to edit hadoop configuration files (that may be managed as a centralized -->
    <!-- resource). -->
    <!-- Hive Execution Parameters -->
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://127.0.0.1/metastore?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>cloudera</value>
    </property>
    <property>
        <name>hive.hwi.war.file</name>
        <value>/usr/lib/hive/lib/hive-hwi-0.8.1-cdh4.0.0.jar</value>
        <description>This is the WAR file with the jsp content for Hive Web Interface</description>
    </property>
    <property>
        <name>datanucleus.fixedDatastore</name>
        <value>true</value>
    </property>
    <property>
        <name>datanucleus.autoCreateSchema</name>
        <value>false</value>
    </property>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://127.0.0.1:9083</value>
        <description>IP address (or fully-qualified domain name) and port of the metastore host</description>
    </property>
</configuration>
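For reference, given the job-xml and script paths used in the workflow (lib/hive-config.xml and lib/impala-script.hql), the application directory on HDFS would be laid out roughly like this (the directory name itself is just a placeholder):

hive_script/            <- what oozie.wf.application.path points to
    workflow.xml
    lib/
        hive-config.xml
        impala-script.hql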
Answer 0 (score: 0):
The last time I ran into this, it turned out that the Hive client was not installed on all of the data nodes.
When you run the Hive query manually, you are probably doing it from a node that has the Hive client installed. But when Oozie is asked to run the query, it will do so from a random data node, so the Hive client needs to be set up on all data nodes.
This assumes that you cannot get Oozie to run Hive queries in general (and that there is nothing specific to this particular command).