Mongo Hadoop connector problem

Asked: 2014-11-19 15:03:22

Tags: java mongodb hadoop connector

I'm trying to run a MapReduce job that reads from Mongo and writes to HDFS, but I can't get it to run. I couldn't find an example of this setup; the problem seems to come from combining a Mongo input with an HDFS output path. Right now I'm getting an authentication error, even though my MongoDB instance has no authentication enabled.

final Configuration conf = getConf();
final Job job = new Job(conf, "sort");
MongoConfig config = new MongoConfig(conf);
MongoConfigUtil.setInputFormat(getConf(), MongoInputFormat.class);
FileOutputFormat.setOutputPath(job, new Path("/trythisdir"));
MongoConfigUtil.setInputURI(conf,"mongodb://localhost:27017/fake_data.file");
//conf.set("mongo.output.uri", "mongodb://localhost:27017/fake_data.file");
job.setJarByClass(imageExtractor.class);
job.setMapperClass(imageExtractorMapper.class);

job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);

job.setInputFormatClass( MongoInputFormat.class );

// Execute job and return status
return job.waitForCompletion(true) ? 0 : 1;

Edit: here is the error I'm currently getting:

Exception in thread "main" java.lang.IllegalArgumentException: Couldn't connect and authenticate to get collection
    at com.mongodb.hadoop.util.MongoConfigUtil.getCollection(MongoConfigUtil.java:353)
    at com.mongodb.hadoop.splitter.MongoSplitterFactory.getSplitterByStats(MongoSplitterFactory.java:71)
    at com.mongodb.hadoop.splitter.MongoSplitterFactory.getSplitter(MongoSplitterFactory.java:107)
    at com.mongodb.hadoop.MongoInputFormat.getSplits(MongoInputFormat.java:56)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1079)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1096)
    at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:177)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:995)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:948)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:948)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
    at com.orbis.image.extractor.mongo.imageExtractor.run(imageExtractor.java:103)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at com.orbis.image.extractor.mongo.imageExtractor.main(imageExtractor.java:78)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.lang.NullPointerException
    at com.mongodb.MongoURI.<init>(MongoURI.java:148)
    at com.mongodb.MongoClient.<init>(MongoClient.java:268)
    at com.mongodb.hadoop.util.MongoConfigUtil.getCollection(MongoConfigUtil.java:351)
    ... 22 more

3 Answers:

Answer 0 (score: 2)

迟到的答案..对人们来说可能是帮助。我在使用Apache Spark时遇到了同样的问题。

I think you need to set mongo.input.uri and mongo.output.uri correctly; they are used by Hadoop as well as by the input and output formats.

/*Correct input and output uri setting on spark(hadoop)*/
conf.set("mongo.input.uri", "mongodb://localhost:27017/dbName.inputColName");
conf.set("mongo.output.uri", "mongodb://localhost:27017/dbName.outputColName");

/*Set input and output formats*/
job.setInputFormatClass( MongoInputFormat.class );
job.setOutputFormatClass( MongoOutputFormat.class );

Also note that a typo in the "mongo.input.uri" or "mongo.output.uri" string produces the same error.
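For reference, the connector treats the path segment of the URI as "database.collection" (so "fake_data.file" in the question means database fake_data, collection file). A tiny illustration of how such a URI breaks down, using plain string handling rather than the driver's actual parser:

```java
// Illustrative only: splits a "mongodb://host:port/db.collection" URI the way
// the mongo-hadoop connector interprets it. Not the real driver parser.
class MongoUriParts {
    public static void main(String[] args) {
        String uri = "mongodb://localhost:27017/dbName.inputColName";

        // The segment after the last '/' is "database.collection".
        String path = uri.substring(uri.lastIndexOf('/') + 1);
        int dot = path.indexOf('.');
        String database = path.substring(0, dot);
        String collection = path.substring(dot + 1);

        System.out.println(database);    // dbName
        System.out.println(collection);  // inputColName
    }
}
```

If either half is misspelled, the connector connects to a database or collection that doesn't exist, which is why a typo in the URI string surfaces as the same connection error.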

Answer 1 (score: 1)

Replace:

MongoConfigUtil.setInputURI(conf, "mongodb://localhost:27017/fake_data.file");

with:

MongoConfigUtil.setInputURI(job.getConfiguration(), "mongodb://localhost:27017/fake_data.file");

The conf object has already been 'consumed' by your job, so you need to set the URI directly on the job's own configuration.
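The underlying reason is that the Job constructor takes a copy of the Configuration it is given, so values set on the original conf afterwards never reach the job. A minimal, self-contained stand-in (not Hadoop itself; Conf and JobLike are hypothetical classes mimicking Configuration and Job) illustrating that copy semantics:

```java
// Stand-in demo: why setting a value on the original configuration AFTER
// constructing the job has no effect. JobLike snapshots a copy of Conf,
// just as new Job(conf, name) copies its Configuration.
class CopySemantics {
    public static void main(String[] args) {
        Conf conf = new Conf();
        JobLike job = new JobLike(conf);   // the job copies conf here

        // Too late: the job already took its snapshot, so this value is lost.
        conf.set("mongo.input.uri", "mongodb://localhost:27017/fake_data.file");
        System.out.println(job.getConfiguration().get("mongo.input.uri")); // null

        // Correct: set the value on the job's own configuration.
        job.getConfiguration().set("mongo.input.uri",
                "mongodb://localhost:27017/fake_data.file");
        System.out.println(job.getConfiguration().get("mongo.input.uri"));
    }
}

// Hypothetical stand-in for org.apache.hadoop.conf.Configuration.
class Conf {
    private final java.util.Map<String, String> props = new java.util.HashMap<>();
    Conf() {}
    Conf(Conf other) { props.putAll(other.props); } // copy constructor
    void set(String k, String v) { props.put(k, v); }
    String get(String k) { return props.get(k); }
}

// Hypothetical stand-in for org.apache.hadoop.mapreduce.Job.
class JobLike {
    private final Conf jobConf;
    JobLike(Conf conf) { this.jobConf = new Conf(conf); } // copies, like Job(conf, name)
    Conf getConfiguration() { return jobConf; }
}
```

The same reasoning suggests an alternative fix: set all mongo.* properties on conf before constructing the Job.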

Answer 2 (score: 0)

You haven't shared your complete code, so it's hard to say, but what you have doesn't match typical usage of the MongoDB Connector for Hadoop.

I suggest starting from the examples on GitHub.
