We have a two-node Hadoop YARN cluster running Hadoop 2.2. Using Oozie, we have scheduled two actions in a single workflow: the first is a Python map-reduce streaming action, and the second is a sqoop export job that transfers the output of the streaming action into a MySQL database.

The streaming action executes successfully, which then kicks off the sqoop job, but the sqoop job keeps running forever.

stdout shows the following:
Sqoop command arguments :
export
--connect
jdbc:mysql://localhost/database
--username
root
--password
root
--table
tableName
--direct
--export-dir
/user/hduser/oozieProject/workflow/output
=================================================================
Invoking Sqoop command line now >>>
2137 [main] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2158 [main] INFO org.apache.sqoop.Sqoop - Running Sqoop version: 1.4.4.2.0.6.1-102
2170 [main] WARN org.apache.sqoop.tool.BaseSqoopTool - Setting your password on the command-line is insecure. Consider using -P instead.
2178 [main] WARN org.apache.sqoop.ConnFactory - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2197 [main] INFO org.apache.sqoop.manager.MySQLManager - Preparing to use a MySQL streaming resultset.
2197 [main] INFO org.apache.sqoop.tool.CodeGenTool - Beginning code generation
2464 [main] INFO org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM `missedCalls` AS t LIMIT 1
2483 [main] INFO org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM `missedCalls` AS t LIMIT 1
2485 [main] INFO org.apache.sqoop.orm.CompilationManager - HADOOP_MAPRED_HOME is /usr/local/hadoop
3838 [main] INFO org.apache.sqoop.orm.CompilationManager - Writing jar file: /tmp/sqoop-hduser/compile/21bd1d5fe13adeed4f46a09f8b3d38fe/missedCalls.jar
3847 [main] INFO org.apache.sqoop.mapreduce.ExportJobBase - Beginning export of missedCalls
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat
The job properties are as follows:
nameNode=hdfs://master:54310
jobTracker=master:8035
queueName=default
oozie.libpath=${nameNode}/user/hduser/share/lib
oozie.use.system.libpath=true
oozie.wf.rerun.failnodes=true
oozieProjectRoot=${nameNode}/user/hduser/oozieProject
appPath=${oozieProjectRoot}/workflow
oozie.wf.application.path=${appPath}
oozieLibPath=${oozie.libpath}
mapred.tasktracker.map.tasks.maximum=4
mapred.tasktracker.reduce.tasks.maximum=4
inputDir=${oozieProjectRoot}/data/*
outputDir=${appPath}/output
The workflow XML is as follows:
<!--Oozie workflow file: workflow.xml -->
<workflow-app name="WorkflowStreamingMRAction-Python" xmlns="uri:oozie:workflow:0.1">
<start to="streamingaAction"/>
<action name="streamingaAction">
<map-reduce>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="${outputDir}"/>
</prepare>
<streaming>
<mapper>python mapper.py</mapper>
<reducer>python reducer.py</reducer>
</streaming>
<configuration>
<property>
<name>oozie.libpath</name>
<value>${oozieLibPath}/mapreduce-streaming</value>
</property>
<property>
<name>mapred.input.dir</name>
<value>${inputDir}</value>
</property>
<property>
<name>mapred.output.dir</name>
<value>${outputDir}</value>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>4</value>
</property>
</configuration>
<file>${appPath}/mapper.py#mapper.py</file>
<file>${appPath}/reducer.py#reducer.py</file>
</map-reduce>
<ok to="sqoopAction"/>
<error to="killJobAction"/>
</action>
<action name="sqoopAction">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<command>export --connect jdbc:mysql://localhost/database --username root --password myPwd --table tableName --direct --export-dir /user/hduser/oozieProject/workflow/output</command>
</sqoop>
<ok to="end"/>
<error to="killJobAction"/>
</action>
<kill name="killJobAction">
<message>"Killed job due to error: ${wf:errorMessage(wf:lastErrorNode())}"</message>
</kill>
<end name="end" />
Please advise what might be going wrong?

Thanks
Answer 0 (score: 1)
It is not running forever. You just need to wait.

First, the Sqoop export job you see above is just the job as scheduled by Oozie. Heart beat means it is running right now; you just need to wait. You can actually go to the YARN ResourceManager page (usually http://$namenode:8088/cluster), where you can find the "real" Sqoop export job. (I guess the default number of mappers is 4.)
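As a small aid for locating that job, the sketch below just builds the ResourceManager URLs involved. The host and port are assumptions: on this two-node cluster the ResourceManager is taken to run on the "master" host (based on the jobTracker=master:8035 property above) with YARN's default web port 8088.

```python
# Build the ResourceManager URLs where the "real" Sqoop export job shows up.
# RM_HOST and RM_PORT are assumptions for this particular cluster.
RM_HOST = "master"
RM_PORT = 8088

cluster_ui = f"http://{RM_HOST}:{RM_PORT}/cluster"
# Hadoop 2.x YARN also exposes the application list via a REST API:
running_apps = f"http://{RM_HOST}:{RM_PORT}/ws/v1/cluster/apps?states=RUNNING"

print(cluster_ui)
print(running_apps)
# On a live cluster you could fetch running_apps with urllib.request.urlopen()
# and look for an application whose name mentions the exported table.
```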
Second, Sqoop performs the "export" by issuing INSERT statements, so it is relatively slow. When the table is large, for example when it has more than a million entries, I would suggest not using Sqoop export.

Third, since I notice you are trying to export to MySQL, you can try batch mode, which runs the INSERT queries like this: INSERT INTO <TABLE> VALUES (<ROW1>), (<ROW2>), etc.

So you can change your command to:
sqoop export -D sqoop.export.records.per.statement=1000 --connect jdbc:mysql://localhost/database --username root --password myPwd --table tableName --direct --export-dir /user/hduser/oozieProject/workflow/output --batch
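To make the effect of batch mode concrete, here is a small illustrative sketch using Python's built-in sqlite3 module (the rows and column names are made up; sqlite3 merely keeps the example self-contained, while the real job targets MySQL). It issues one multi-row INSERT of the shape shown above instead of one statement per row:

```python
import sqlite3

# In-memory database stands in for MySQL; columns and rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE missedCalls (caller TEXT, calls INTEGER)")

rows = [("555-0100", 3), ("555-0101", 1), ("555-0102", 7)]

# One multi-row statement -- the shape batch mode emits -- instead of
# len(rows) separate INSERTs, which is where the round-trip savings come from.
placeholders = ", ".join(["(?, ?)"] * len(rows))
flat_values = [value for row in rows for value in row]
conn.execute(f"INSERT INTO missedCalls VALUES {placeholders}", flat_values)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM missedCalls").fetchone()[0]
print(count)  # 3
```

With sqoop.export.records.per.statement=1000 as in the command above, each such statement would carry up to 1000 rows.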