Importing a MySQL table into Hive via sqoop import

Time: 2017-02-12 14:08:33

Tags: hadoop hive sqoop

I am trying to import a MySQL table into Hive using the following command:

sqoop import \
  --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
  --username=retail_dba \
  --password=cloudera \
  --table departments \
  --hive-import \
  --hive-overwrite \
  --create-hive-table \
  --num-mappers 1

But the command fails with the error:

Unable to acquire IMPLICIT, SHARED lock default after 100 attempts.
FAILED: Error in acquiring locks: Locks on the underlying objects cannot be acquired. retry after some time

Can anyone suggest a solution/fix for this?

I am on the Cloudera QuickStart VM 5.8 & Sqoop version 1.4.6-cdh5.8.0.

Please see the full log below:

[cloudera@quickstart flume]$ sqoop import \
>   --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
>   --username=retail_dba \
>   --password=cloudera \
>   --table departments \
>   --hive-import \
>   --hive-overwrite \
>   --create-hive-table \
>   --num-mappers 1
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/02/11 09:26:11 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.8.0
17/02/11 09:26:11 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/02/11 09:26:11 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
17/02/11 09:26:11 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
17/02/11 09:26:12 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
17/02/11 09:26:12 INFO tool.CodeGenTool: Beginning code generation
17/02/11 09:26:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments` AS t LIMIT 1
17/02/11 09:26:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments` AS t LIMIT 1
17/02/11 09:26:13 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-cloudera/compile/a7785245077188e350b3c12ef9968189/departments.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/02/11 09:26:17 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/a7785245077188e350b3c12ef9968189/departments.jar
17/02/11 09:26:17 WARN manager.MySQLManager: It looks like you are importing from mysql.
17/02/11 09:26:17 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
17/02/11 09:26:17 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
17/02/11 09:26:17 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
17/02/11 09:26:17 INFO mapreduce.ImportJobBase: Beginning import of departments
17/02/11 09:26:18 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/02/11 09:26:21 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/02/11 09:26:21 INFO client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/172.16.237.138:8032
17/02/11 09:26:23 WARN hdfs.DFSClient: Caught exception 
java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1281)
    at java.lang.Thread.join(Thread.java:1355)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:862)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:600)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:789)
17/02/11 09:26:25 INFO db.DBInputFormat: Using read commited transaction isolation
17/02/11 09:26:26 INFO mapreduce.JobSubmitter: number of splits:1
17/02/11 09:26:27 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1485875121168_0020
17/02/11 09:26:27 INFO impl.YarnClientImpl: Submitted application application_1485875121168_0020
17/02/11 09:26:27 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1485875121168_0020/
17/02/11 09:26:27 INFO mapreduce.Job: Running job: job_1485875121168_0020
17/02/11 09:26:41 INFO mapreduce.Job: Job job_1485875121168_0020 running in uber mode : false
17/02/11 09:26:41 INFO mapreduce.Job:  map 0% reduce 0%
17/02/11 09:27:00 INFO mapreduce.Job:  map 100% reduce 0%
17/02/11 09:27:00 INFO mapreduce.Job: Job job_1485875121168_0020 completed successfully
17/02/11 09:27:00 INFO mapreduce.Job: Counters: 30
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=142737
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=87
        HDFS: Number of bytes written=60
        HDFS: Number of read operations=4
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Launched map tasks=1
        Other local map tasks=1
        Total time spent by all maps in occupied slots (ms)=8251904
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=16117
        Total vcore-seconds taken by all map tasks=16117
        Total megabyte-seconds taken by all map tasks=8251904
    Map-Reduce Framework
        Map input records=6
        Map output records=6
        Input split bytes=87
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=248
        CPU time spent (ms)=1310
        Physical memory (bytes) snapshot=134692864
        Virtual memory (bytes) snapshot=728621056
        Total committed heap usage (bytes)=48234496
    File Input Format Counters 
        Bytes Read=0
    File Output Format Counters 
        Bytes Written=60
17/02/11 09:27:00 INFO mapreduce.ImportJobBase: Transferred 60 bytes in 39.6446 seconds (1.5134 bytes/sec)
17/02/11 09:27:00 INFO mapreduce.ImportJobBase: Retrieved 6 records.
17/02/11 09:27:01 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `departments` AS t LIMIT 1
17/02/11 09:27:01 INFO hive.HiveImport: Loading uploaded data into Hive

Logging initialized using configuration in jar:file:/usr/lib/hive/lib/hive-common-1.1.0-cdh5.8.0.jar!/hive-log4j.properties


Unable to acquire IMPLICIT, SHARED lock default after 100 attempts.
FAILED: Error in acquiring locks: Locks on the underlying objects cannot be acquired. retry after some time

1 Answer:

Answer 0: (score: 0)

I found the problem here: ZooKeeper had not been started.
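
If you hit the same error, a quick sanity check before re-running the import is to confirm ZooKeeper is actually up. A minimal sketch for the QuickStart VM, assuming the stock CDH 5 zookeeper-server init script (adjust the service name if your packaging differs):

sudo service zookeeper-server status   # report whether ZooKeeper is running
sudo service zookeeper-server start    # start it if it is stopped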

After starting the service, the command completed successfully!
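
For background: when hive.support.concurrency is enabled, Hive's lock manager acquires its IMPLICIT/SHARED locks through ZooKeeper, which is why the import only failed at the final "Loading uploaded data into Hive" step. A hedged way to inspect the relevant settings, assuming the default client config path /etc/hive/conf/hive-site.xml:

grep -A1 'hive.support.concurrency' /etc/hive/conf/hive-site.xml   # is lock acquisition enabled
grep -A1 'hive.zookeeper.quorum' /etc/hive/conf/hive-site.xml      # which ZooKeeper hosts Hive contacts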