I'm trying to run a WordCount Java program from Eclipse on a Hadoop multi-node cluster (it works fine on a single-node cluster, but fails on the multi-node one). I get the following messages back:
INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
16/04/24 21:30:46 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "wordcount");

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);

    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);

    FileInputFormat.addInputPath(job, new Path("hdfs://localhost:54310/user/hduser/sam/"));
    FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:54310/user/hduser/wc-output"));

    job.waitForCompletion(true);
}
}
I think something is wrong with the paths. I am running this code on the master node.
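The retry messages show the client resolving the NameNode as localhost/127.0.0.1:54310, which only works when the NameNode runs on the same machine as the client. On a multi-node cluster, the filesystem URI should point at the master's hostname instead of localhost. A minimal sketch of core-site.xml under that assumption, where `master` is a hypothetical hostname standing in for your NameNode host (the property is named fs.default.name on Hadoop 1.x and fs.defaultFS on Hadoop 2.x):

```xml
<!-- core-site.xml: point clients at the NameNode host, not localhost -->
<!-- "master" is a placeholder; use the actual hostname of your NameNode -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>
```

The hardcoded `hdfs://localhost:54310/...` paths in the code above would then need the same change, or could drop the scheme and authority entirely (e.g. `/user/hduser/sam/`) so the configured default filesystem is used.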
Answer 0 (score: 0)
Does the command

hdfs dfs -ls hdfs://localhost:54310/user/hduser/sam/

work?