Nutch - Error: JAVA_HOME is not set when trying to crawl

Posted: 2014-07-16 02:25:58

Tags: java hadoop cassandra nutch emr

First off, I'm new to Nutch/Hadoop. I have Cassandra installed, and I've installed Nutch on the master node of my EMR cluster. When I try to run a crawl with the following command:

sudo bin/crawl crawl urls -dir crawl -depth 3 -topN 5

I get:

Error: JAVA_HOME is not set.

If I run the command without sudo, I get:

Injector: starting at 2014-07-16 02:12:24
Injector: crawlDb: urls/crawldb
Injector: urlDir: crawl
Injector: Converting injected urls to crawl db entries.
Injector: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/home/hadoop/apache-nutch-1.8/runtime/local/crawl
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
    at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1081)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1073)
    at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
    at org.apache.nutch.crawl.Injector.inject(Injector.java:279)
    at org.apache.nutch.crawl.Injector.run(Injector.java:316)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.nutch.crawl.Injector.main(Injector.java:306)

I can't figure this out. I saw another forum post about this here: Similar Topic

and followed it to no avail. I added

export JAVA_HOME=/usr/lib/jvm/java-7-oracle

export PATH=$PATH:${JAVA_HOME}/bin

to my ~/.bashrc, and I'm running Linux.
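
(A note on the sudo failure: exports in ~/.bashrc only take effect in new shells for the user who owns the file, and sudo resets the environment by default, which is why JAVA_HOME vanishes under sudo even after the exports above. A minimal check, as a sketch; whether -E actually preserves the variable depends on the local sudoers policy:)

source ~/.bashrc                 # reload the exports into the current shell
echo $JAVA_HOME                  # should print /usr/lib/jvm/java-7-oracle
sudo env | grep JAVA_HOME        # typically prints nothing: sudo resets the environment
sudo -E env | grep JAVA_HOME     # -E asks sudo to preserve the caller's environment
# or set the variable explicitly for a single sudo command:
sudo JAVA_HOME=/usr/lib/jvm/java-7-oracle bin/crawl <args>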

Any help would be greatly appreciated!

1 answer:

Answer 0: (score: 0)

The problem was that I was running

sudo bin/crawl crawl urls -dir crawl -depth 3 -topN 5
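
Read against the positional arguments of the Nutch 1.8 crawl script, that command also explains the earlier InvalidInputException. This is an inference from the injector log lines rather than anything Nutch reports directly:

# How bin/crawl (Nutch 1.8) appears to have read the old-style flags,
# given its usage of roughly: crawl <seedDir> <crawlID> <solrURL> <numberOfRounds>
#   bin/crawl  crawl  urls  -dir crawl -depth 3 -topN 5
#              ^seedDir ^crawlID
# which matches the log ("crawlDb: urls/crawldb", "urlDir: crawl") and the
# missing input path file:/home/hadoop/apache-nutch-1.8/runtime/local/crawl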

Instead, I used

bin/crawl ./urls/seed.txt TestCrawl http://localhost:8983/solr/ 5

and everything worked fine. It was just a malformed command, i.e. the old 'crawl' syntax is deprecated, as described here: Apache Nutch Tutorial
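
For reference, a minimal end-to-end sketch of the 1.8-style invocation. The seed URL here is just a placeholder, and http://localhost:8983/solr/ assumes a local Solr instance as in the command above:

# Put one or more seed URLs in a file under a seed directory:
mkdir -p urls
echo "http://nutch.apache.org/" > urls/seed.txt

# New-style crawl: <seedDir> <crawlID> <solrURL> <numberOfRounds>
# (the answer above passed the seed file path directly, which also worked)
bin/crawl urls TestCrawl http://localhost:8983/solr/ 5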