Hadoop C++ HDFS test throws an exception when run

Date: 2014-01-11 15:31:50

Tags: c++ exception hadoop hdfs

I am using Hadoop 2.2.0 and trying to run this hdfs_test.cpp application:

#include "hdfs.h"

#include <stdio.h>   /* fprintf */
#include <stdlib.h>  /* exit */
#include <string.h>  /* strlen */

int main(int argc, char **argv) {

    /* Connect to the default filesystem named in the Hadoop config (fs.defaultFS). */
    hdfsFS fs = hdfsConnect("default", 0);
    const char* writePath = "/tmp/testfile.txt";
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
    if (!writeFile) {
        fprintf(stderr, "Failed to open %s for writing!\n", writePath);
        exit(-1);
    }
    /* Write the string, including its trailing NUL byte. */
    const char* buffer = "Hello, World!";
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer) + 1);
    if (num_written_bytes < 0) {
        fprintf(stderr, "Failed to write to %s!\n", writePath);
        exit(-1);
    }
    if (hdfsFlush(fs, writeFile)) {
        fprintf(stderr, "Failed to 'flush' %s\n", writePath);
        exit(-1);
    }
    hdfsCloseFile(fs, writeFile);
    hdfsDisconnect(fs);
    return 0;
}
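
For reference, I compiled it with a command along these lines (the include and library paths are the defaults for a Hadoop 2.2.0 tarball and a 64-bit JVM, so yours may differ):

g++ hdfs_test.cpp -o hdfs_test \
    -I${HADOOP_HOME}/include \
    -L${HADOOP_HOME}/lib/native -lhdfs \
    -L${JAVA_HOME}/jre/lib/amd64/server -ljvm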

It compiles fine, but when I run it with ./hdfs_test I get this:

loadFileSystems error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsBuilderConnect(forceNewInstance=0, nn=default, port=0, kerbTicketCachePath=(NULL), userName=(NULL)) error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsOpenFile(/tmp/testfile.txt): constructNewObjectOfPath error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
Failed to open /tmp/testfile.txt for writing!

It looks like a path problem. My $HADOOP_HOME is /usr/local/hadoop, and this is my CLASSPATH variable:

echo $CLASSPATH
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar

Any help is appreciated. Thanks.

3 Answers:

Answer 0 (score: 5)

Try this:

hadoop classpath --glob

Then add the result to the CLASSPATH variable in your ~/.bashrc.
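
A minimal sketch of what that could look like in ~/.bashrc (assuming the hadoop command is on your PATH):

export CLASSPATH=$(hadoop classpath --glob)

Then start a new shell, or run source ~/.bashrc, so the change takes effect.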

Answer 1 (score: 3)

I have run into problems with wildcards in the classpath when working with JNI-based programs. Try the direct-jar-in-classpath approach instead, such as the one generated in my sample code at https://github.com/QwertyManiac/cdh4-libhdfs-example/blob/master/exec.sh#L3; I believe it should work. The complete example at https://github.com/QwertyManiac/cdh4-libhdfs-example does work.

See also https://stackoverflow.com/a/9322747/1660002
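
The core idea is that the shell, not JNI, expands the globs, so the embedded JVM only ever sees explicit jar paths. A minimal sketch of that approach (directory names assumed from the layout shown in the question):

CLASSPATH=${HADOOP_HOME}/etc/hadoop
for jar in ${HADOOP_HOME}/share/hadoop/common/lib/*.jar \
           ${HADOOP_HOME}/share/hadoop/common/*.jar \
           ${HADOOP_HOME}/share/hadoop/hdfs/lib/*.jar \
           ${HADOOP_HOME}/share/hadoop/hdfs/*.jar; do
    CLASSPATH=${CLASSPATH}:${jar}
done
export CLASSPATH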

Answer 2 (score: 0)

JNI will not expand a wildcard CLASSPATH, so just adding the output of hadoop classpath --glob will not work. The right way is:

export CLASSPATH=${HADOOP_HOME}/etc/hadoop:`find ${HADOOP_HOME}/share/hadoop/ | awk '{path=path":"$0}END{print path}'`
export LD_LIBRARY_PATH="${HADOOP_HOME}/lib/native":$LD_LIBRARY_PATH
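
To check that the resulting CLASSPATH really contains no literal wildcards (which the embedded JVM would not expand), you can run:

echo $CLASSPATH | tr ':' '\n' | grep '\*'

If this prints nothing, every entry is an explicit path.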