Sqoop error: extraneous input 't1' expecting EOF near '&lt;EOF&gt;'

Date: 2014-03-19 15:53:12

Tags: hadoop sqoop

I am trying to import some data from a Hive cluster into another HDFS cluster using multiple mappers. I use the command below to import the data:

/opt/isv/app/pkgs/sqoop-1.4.4.bin__hadoop-1.0.0/bin/sqoop import --connect jdbc:hive://XXXXXX.com:10000/strrecommender --driver org.apache.hadoop.hive.jdbc.HiveDriver -e 'select upc_cd, sltrn_dt, sltrn_id, loc_id, pos_rgstr_id, hh_id from strrecommender.sltrn_dtl_full where TO_DATE(part_dt) >= "2011-03-04" AND TO_DATE(part_dt) < "2011-03-11" AND $CONDITIONS' --target-dir /user/rxg3437/QADataThroughSqoopWeekly/ramesh -m 2 --split-by sltrn_dt

Internally, this command generates another query to fetch the minimum and maximum dates:

SELECT MIN(sltrn_dt), MAX(sltrn_dt) FROM (select upc_cd, sltrn_dt, sltrn_id, loc_id, pos_rgstr_id, hh_id from strrecommender.sltrn_dtl_full where TO_DATE(part_dt) >= "2011-03-04" AND TO_DATE(part_dt) < "2011-03-11" AND (1 = 1)) AS t1
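As a rough illustration of where that query comes from (a simplified sketch, not the actual Sqoop source): the split-bounding query is built by substituting the `$CONDITIONS` placeholder with `(1 = 1)` and wrapping the user query in a `MIN`/`MAX` query on the `--split-by` column:

```python
# Simplified sketch (assumption: this mimics, not reproduces, what
# Sqoop's DataDrivenDBInputFormat does when computing split bounds).
def bounding_query(user_query: str, split_by: str) -> str:
    # Sqoop replaces the $CONDITIONS token with a tautology for the
    # bounding query, then aliases the subquery as t1.
    inner = user_query.replace("$CONDITIONS", "(1 = 1)")
    return f"SELECT MIN({split_by}), MAX({split_by}) FROM ({inner}) AS t1"

query = ('select upc_cd, sltrn_dt from strrecommender.sltrn_dtl_full '
         'where TO_DATE(part_dt) >= "2011-03-04" AND $CONDITIONS')
print(bounding_query(query, "sltrn_dt"))
```

The trailing `AS t1` in the generated SQL is exactly the token the Hive parser rejects in the error below.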

This query fails with the following error:

14/03/19 11:43:12 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.sql.SQLException: Query returned non-zero code: 40000, cause: FAILED: ParseException line 1:195 extraneous input 't1' expecting EOF near '&lt;EOF&gt;'

    at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:170)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
    at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186)
    at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:239)
    at org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:645)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:238)

Caused by: java.sql.SQLException: Query returned non-zero code: 40000, cause: FAILED: ParseException line 1:195 extraneous input 't1' expecting EOF near '&lt;EOF&gt;'

    at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:194)
    at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:145)
    ... 23 more

Can anyone help?

1 Answer:

Answer 0 (score: 0)

You should not use -e for the query; use --query instead. Here is the example from the official Sqoop documentation:

17.3. Example Invocations

   Select ten records from the employees table:

   $ sqoop eval --connect jdbc:mysql://db.example.com/corp \
     --query "SELECT * FROM employees LIMIT 10"

   Insert a row into the foo table:

   $ sqoop eval --connect jdbc:mysql://db.example.com/corp \
     -e "INSERT INTO foo VALUES(42, 'bar')"
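A side note on quoting, separate from the -e/--query choice: Sqoop must receive the $CONDITIONS token literally, which is why the query string is usually single-quoted. A minimal shell sketch (with a hypothetical query string, not the original command) shows what double quotes would do instead:

```shell
# Single quotes keep $CONDITIONS literal; double quotes let the shell
# expand it (normally to an empty string) before Sqoop ever sees it.
SINGLE='SELECT 1 WHERE $CONDITIONS'
DOUBLE="SELECT 1 WHERE $CONDITIONS"
echo "single-quoted: $SINGLE"
echo "double-quoted: $DOUBLE"
```

If the placeholder is lost this way, Sqoop cannot inject its split predicates, so the literal form is what you want in the import command.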