Hadoop: java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException

Date: 2014-08-05 03:36:51

Tags: java hadoop filesystems hdfs hadoop-streaming

I am new to Hadoop and only started using it today. I want to write a file to the HDFS Hadoop server. The server is running Hadoop 1.2.1, and when I run the jps command in the CLI I can see all the nodes running:

31895 Jps
29419 SecondaryNameNode
29745 TaskTracker
29257 DataNode
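
(For comparison, a healthy pseudo-distributed Hadoop 1.x setup would normally also show a NameNode and a JobTracker process in the jps listing; the PIDs below are only illustrative:)

29100 NameNode
29600 JobTracker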

Here is my sample client code that writes a file to the HDFS system:

package com.test.hadoop.writefiles;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class FileWriter {
    public static void main(String[] args) {
        try {
            // 1. Get an instance of Configuration and load the cluster config files
            Configuration configuration = new Configuration();
            configuration.addResource(new Path("/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/conf/core-site.xml"));
            configuration.addResource(new Path("/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/conf/hdfs-site.xml"));
            // 2. Create an InputStream to read the data from the local file
            InputStream inputStream = new BufferedInputStream(
                    new FileInputStream("/home/local/PAYODA/hariprasanth.l/Desktop/ProjectionTest"));
            // 3. Get the HDFS instance
            FileSystem hdfs = FileSystem.get(new URI("hdfs://localhost:54310"), configuration);
            // 4. Open an OutputStream to write the data; this can be obtained from the FileSystem
            OutputStream outputStream = hdfs.create(
                    new Path("hdfs://localhost:54310/user/hadoop/Hadoop_File.txt"),
                    new Progressable() {
                        @Override
                        public void progress() {
                            System.out.println("....");
                        }
                    });
            try {
                IOUtils.copyBytes(inputStream, outputStream, 4096, false);
            } finally {
                IOUtils.closeStream(inputStream);
                IOUtils.closeStream(outputStream);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
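
(For anyone reproducing this: a minimal sketch of how to compile and run a standalone client like the one above, assuming the hadoop-core-1.2.1.jar and its lib/ dependencies sit in the install directory shown in the config paths, would be something like:)

    # assumes the Hadoop 1.2.1 install layout from the config paths above
    javac -cp /data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/hadoop-core-1.2.1.jar -d . FileWriter.java
    java -cp '.:/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/hadoop-core-1.2.1.jar:/data/WorkArea/hadoop/hadoop-1.2.1/hadoop-1.2.1/lib/*' com.test.hadoop.writefiles.FileWriter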

The exception when running the code:

java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1063)
    at org.apache.hadoop.ipc.Client.call(Client.java:1031)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
    at com.sun.proxy.$Proxy0.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:235)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:275)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:249)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:163)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:283)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:247)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:109)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1792)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:76)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1826)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1808)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:265)
    at com.test.hadoop.writefiles.FileWriter.main(FileWriter.java:27)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:760)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:698)

When I debugged it, I found the error occurs on the line where I try to connect to the local HDFS server:

  FileSystem hdfs = FileSystem.get(new URI("hdfs://localhost:54310"), configuration);
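
(One thing worth verifying at this point is that the URI in the client matches the address the NameNode was actually started with; a sketch of the relevant entry in conf/core-site.xml, assuming the default Hadoop 1.x property name fs.default.name:)

    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:54310</value>
    </property>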

From what I googled, it indicates that I have a version mismatch.

The Hadoop server version is 1.2.1; the client jars being used are:

hadoop-common-0.22.0.jar
hadoop-hdfs-0.22.0.jar

Please tell me what the problem is as soon as possible.

If possible, also suggest where I can find the client jars for Hadoop, and name the jars as well... please...

Regards, Hari

2 Answers:

Answer 0 (score: 2):

This happens because the same classes are represented in different jars (i.e. hadoop-commons and hadoop-core), which clash with each other. I was actually confused about which jars to use.

In the end I used the Apache hadoop-core jar alone, and it worked right away.
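
(For reference, a sketch of the matching client dependency for a 1.2.1 server, assuming a Maven build; hadoop-core is the monolithic 1.x client jar published on Maven Central:)

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
      <version>1.2.1</version>
    </dependency>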

Answer 1 (score: 0):

The NameNode is not running. The problem is with your NameNode. Did you format the NameNode before starting it?

hadoop namenode -format
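
(A typical recovery sequence on a fresh Hadoop 1.x install, assuming the standard bin/ scripts, is to format the NameNode once, start the HDFS daemons, and confirm that NameNode now appears in jps:)

    # WARNING: formatting erases existing HDFS metadata; only run this on a fresh install
    bin/hadoop namenode -format
    # start the HDFS daemons (NameNode, DataNode, SecondaryNameNode)
    bin/start-dfs.sh
    # NameNode should now appear in the process list
    jps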