When a Sqoop export fails, how do I find out the exact error?

Date: 2018-11-10 17:15:29

Tags: export sqoop

When running the following sqoop export:

sqoop export --connect jdbc:mysql://ip-172-31-20-247/dbname --username uname --password pwd --table orders --export-dir /orders.txt

I get the following error:

18/11/10 16:18:52 INFO mapreduce.Job:  map 0% reduce 0%
18/11/10 16:19:00 INFO mapreduce.Job:  map 100% reduce 0%
18/11/10 16:19:01 INFO mapreduce.Job: Job job_1537636876515_6580 failed with state FAILED due to: Task failed task_1537636876515_6580_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
18/11/10 16:19:01 INFO mapreduce.Job: Counters: 12
        Job Counters 
                Failed map tasks=1
                Killed map tasks=3
                Launched map tasks=4
                Data-local map tasks=4
                Total time spent by all maps in occupied slots (ms)=61530
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=20510
                Total vcore-milliseconds taken by all map tasks=20510
                Total megabyte-milliseconds taken by all map tasks=31503360
        Map-Reduce Framework
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
18/11/10 16:19:01 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
18/11/10 16:19:01 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 17.1712 seconds (0 bytes/sec)
18/11/10 16:19:01 INFO mapreduce.ExportJobBase: Exported 0 records.
18/11/10 16:19:01 ERROR mapreduce.ExportJobBase: Export job failed!
18/11/10 16:19:01 ERROR tool.ExportTool: Error during export: Export job failed!

How can I determine what the exact error is?

1 answer:

Answer 0 (score: 0)

Without seeing the file data and other details, it is hard to say why the sqoop export job is failing. Make sure you are using the correct field delimiter and that the file layout stays in sync with the table structure.
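The console output above only shows the generic "Export job failed!" message; the actual exception is in the failed map task's log. A sketch of pulling it with the YARN CLI, assuming the application id is derived from the job id in the log (`job_...` becomes `application_...`) and that log aggregation is enabled on the cluster:

```shell
# Fetch the aggregated container logs for the failed job and
# filter for the stack trace around the first error or exception.
yarn logs -applicationId application_1537636876515_6580 \
  | grep -A 20 'Error\|Exception'
```

Alternatively, the same task logs can be browsed through the ResourceManager or JobHistory web UI by following the link for the failed attempt `task_1537636876515_6580_m_000000`. A common root cause surfaced this way is a parse error, which points at a delimiter or column mismatch.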

Could you try the sqoop export script below after adjusting the parameters for your environment? Here I sqoop data from an HDFS file into SQL Server.

sqoop export \
--connect "jdbc:sqlserver://servername:1433;databaseName=EMP;" \
--connection-manager org.apache.sqoop.manager.SQLServerManager \
--username userid \
-P \
--table sql_server_table_name \
--input-fields-terminated-by '|' \
--export-dir /hdfs path location of file/part-m-00000 \
--num-mappers 1

Let me know whether it works for you. I have tested it a few times and it works fine. My data is delimited by '|', so I chose --input-fields-terminated-by '|'. You can pick the delimiter that matches your data on HDFS.
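To pick the right delimiter, it helps to eyeball a few records of the export file and compare the column count against the target table. A minimal sketch, assuming the path is the `--export-dir` from the original command:

```shell
# Show the first few records so the field delimiter and the
# number of columns per line can be checked by eye.
hdfs dfs -cat /orders.txt | head -n 5

# Then compare against the table definition on the database side,
# e.g. in the MySQL client:
#   mysql> DESCRIBE orders;
```

If the delimiter in the file differs from what `--input-fields-terminated-by` declares, the map tasks fail while parsing, which matches the symptom in the log above (all map tasks fail, 0 records exported).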