I compiled the MapReduce code (the driver, mapper, and reducer classes) and created a jar file. When I run it on the dataset, it does not seem to run; it just returns to the prompt, as shown in the image. Any suggestions?
Many thanks, basam
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
//This driver program will bring all the information needed to submit this Map reduce job.
public class MultiLangDictionary {
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: MultiLangDictionary <input path> <output path>");
            System.exit(-1);
        }
        Configuration conf = new Configuration();
        Job ajob = new Job(conf, "MultiLangDictionary");
        //Assigning the driver class name
        ajob.setJarByClass(MultiLangDictionary.class);
        FileInputFormat.addInputPath(ajob, new Path(args[0]));
        //first argument is the job itself
        //second argument is the location of the output dataset
        FileOutputFormat.setOutputPath(ajob, new Path(args[1]));
        ajob.setInputFormatClass(TextInputFormat.class);
        ajob.setOutputFormatClass(TextOutputFormat.class);
        //Defining the mapper class name
        ajob.setMapperClass(MultiLangDictionaryMapper.class);
        //Defining the Reducer class name
        ajob.setReducerClass(MultiLangDictionaryReducer.class);
        //setting the second argument as a path in a path variable
        Path outputPath = new Path(args[1]);
        //deleting the output path automatically from hdfs so that we don't have to delete it explicitly
        outputPath.getFileSystem(conf).delete(outputPath);
    }
}
Answer 0 (score: 0)
Try using the fully qualified Java class name (packagename.classname) in the command:
hadoop jar MultiLangDictionary.jar [yourpackagename].MultiLangDictionary input output
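For example, assuming the driver class were compiled into a hypothetical package named mr.dict, the invocation would look like:
hadoop jar MultiLangDictionary.jar mr.dict.MultiLangDictionary input output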
Answer 1 (score: 0)
You could try adding the Map and Reduce output key/value types to the driver, something like this (just an example):
job2.setMapOutputKeyClass(Text.class);
job2.setMapOutputValueClass(Text.class);
job2.setOutputKeyClass(Text.class);
job2.setOutputValueClass(Text.class);
With the above, both the Mapper and the Reducer would write (Text, Text) pairs in their context.write() calls.
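For reference, here is a minimal sketch of how the posted driver's main method might look with those output-type calls in place. It is only an illustration: it assumes the same imports as the posted driver and that both the Mapper and Reducer emit Text keys and Text values. Note also that the posted driver configures the job but never submits it, so a call such as ajob.waitForCompletion(true) is needed before anything actually runs; the sketch includes that step.
public static void main(String[] args) throws Exception {
    if (args.length != 2) {
        System.err.println("Usage: MultiLangDictionary <input path> <output path>");
        System.exit(-1);
    }
    Configuration conf = new Configuration();
    // Job.getInstance is the non-deprecated replacement for new Job(conf, name)
    Job ajob = Job.getInstance(conf, "MultiLangDictionary");
    ajob.setJarByClass(MultiLangDictionary.class);
    ajob.setMapperClass(MultiLangDictionaryMapper.class);
    ajob.setReducerClass(MultiLangDictionaryReducer.class);
    // Key/value types the Mapper emits (assumed Text/Text here)
    ajob.setMapOutputKeyClass(Text.class);
    ajob.setMapOutputValueClass(Text.class);
    // Key/value types the Reducer writes to the output files (assumed Text/Text here)
    ajob.setOutputKeyClass(Text.class);
    ajob.setOutputValueClass(Text.class);
    ajob.setInputFormatClass(TextInputFormat.class);
    ajob.setOutputFormatClass(TextOutputFormat.class);
    FileInputFormat.addInputPath(ajob, new Path(args[0]));
    Path outputPath = new Path(args[1]);
    FileOutputFormat.setOutputPath(ajob, outputPath);
    // Remove any existing output directory so the job does not fail on startup
    outputPath.getFileSystem(conf).delete(outputPath, true);
    // Submit the job and block until it finishes; without this call the driver
    // only configures the job and then exits back to the prompt
    System.exit(ajob.waitForCompletion(true) ? 0 : 1);
}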