I am running Sqoop on AWS EMR, trying to copy a table of ~10 GB from MySQL to HDFS.
I get the following exception:
15/07/06 12:19:07 INFO mapreduce.Job: Task Id : attempt_1435664372091_0048_m_000000_2, Status : FAILED
Error: java.io.IOException: mysqldump terminated with status 3
    at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:485)
    at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:152)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:773)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
15/07/06 12:19:07 INFO mapreduce.Job: Task Id : attempt_1435664372091_0048_m_000005_2, Status : FAILED
Error: java.io.IOException: mysqldump terminated with status 2
    at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:485)
    at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:152)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:773)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
15/07/06 12:19:08 INFO mapreduce.Job: map 0% reduce 0%
15/07/06 12:19:20 INFO mapreduce.Job: map 25% reduce 0%
15/07/06 12:19:22 INFO mapreduce.Job: map 38% reduce 0%
15/07/06 12:19:23 INFO mapreduce.Job: map 50% reduce 0%
15/07/06 12:19:24 INFO mapreduce.Job: map 75% reduce 0%
15/07/06 12:19:25 INFO mapreduce.Job: map 100% reduce 0%
15/07/06 12:23:11 INFO mapreduce.Job: Job job_1435664372091_0048 failed with state FAILED due to: Task failed task_1435664372091_0048_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
15/07/06 12:23:11 INFO mapreduce.Job: Counters: 8
    Job Counters
        Failed map tasks=28
        Launched map tasks=28
        Other local map tasks=28
        Total time spent by all maps in occupied slots (ms)=34760760
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=5793460
        Total vcore-seconds taken by all map tasks=5793460
        Total megabyte-seconds taken by all map tasks=8342582400
15/07/06 12:23:11 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
15/07/06 12:23:11 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 829.8697 seconds (0 bytes/sec)
15/07/06 12:23:11 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/07/06 12:23:11 INFO mapreduce.ImportJobBase: Retrieved 0 records.
15/07/06 12:23:11 ERROR tool.ImportTool: Error during import: Import job failed!
If I run with the '--direct' option, I get the communications exception described in https://issues.cloudera.org/browse/SQOOP-186. I have already set the 'net-write-timeout' and 'net-read-timeout' values in MySQL to 6000.
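For reference, this is roughly how those timeouts can be set (a sketch, assuming a MySQL account with the SUPER privilege; note that SET GLOBAL uses underscores in the variable names, while my.cnf also accepts dashes):

# Set at runtime on the MySQL server:
mysql -h <remote ip> -u tuser -p -e "SET GLOBAL net_write_timeout = 6000; SET GLOBAL net_read_timeout = 6000;"

# Or persistently in my.cnf under [mysqld]:
#   net_write_timeout = 6000
#   net_read_timeout = 6000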
My Sqoop command looks like this:
sqoop import -D mapred.task.timeout=0 --fields-terminated-by '\t' --escaped-by '\\' --optionally-enclosed-by '\"' --bindir ./ --connect jdbc:mysql://<remote ip>/<mysql db> --username tuser --password tuser --table table1 --target-dir=/base/table1 --split-by id -m 8 --direct
How do I fix this? What am I missing?
I have also created a Sqoop JIRA: https://issues.apache.org/jira/browse/SQOOP-2411
Answer 0 (score: 1)
I have seen this error happen when Sqoop cannot divide the key space evenly and one of the map tasks ends up processing zero rows of data. A possible workaround is to change the number of mappers (-m / --num-mappers) or to specify a different key column with evenly distributed values (--split-by).
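As an illustrative sketch only (the id column, credentials, and paths come from the question; the mapper count of 4 is an arbitrary example to tune), you could first check how evenly the key space is populated and then retry with fewer mappers:

# Inspect the range and row count of the split column:
mysql -h <remote ip> -u tuser -p -e "SELECT MIN(id), MAX(id), COUNT(*) FROM <mysql db>.table1"

# Retry the import with a reduced mapper count:
sqoop import -D mapred.task.timeout=0 --connect jdbc:mysql://<remote ip>/<mysql db> --username tuser --password tuser --table table1 --target-dir /base/table1 --split-by id -m 4 --direct

If MAX(id) - MIN(id) is much larger than COUNT(*), the id values are sparse and some mapper ranges may be nearly empty, which matches the failure pattern described above.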
Answer 1 (score: 0)
You can try running the command below and see if it works. I am not sure, but I suspect there is a problem with your sqoop import command.
sqoop import --connect "jdbc:mysql://<remote ip>/<mysql db>" --password "core" --username "core" --table "TABLENAME" --target-dir "/sqoopfile2" -m 8 --direct
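Since the 'mysqldump terminated' errors come from Sqoop's direct mode (the stack trace shows MySQLDumpMapper, which wraps mysqldump), another variant worth trying, as a diagnostic sketch rather than a known fix, is the same import without --direct so Sqoop falls back to the plain JDBC path:

# Same import, but using JDBC instead of mysqldump:
sqoop import --connect "jdbc:mysql://<remote ip>/<mysql db>" --username tuser --password tuser --table table1 --target-dir /base/table1 --split-by id -m 8

If this succeeds, the problem is isolated to mysqldump on the task nodes (for example, its version or its connectivity to the remote MySQL host) rather than to the Sqoop job itself.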