Nutch job fails when sending data to Solr

Date: 2013-09-12 14:49:45

Tags: search solr nutch

I have been trying all sorts of things with no luck. My Nutch/Solr setup is based on this guide:

http://ubuntuforums.org/showthread.php?t=1532230

Now that Nutch and Solr are up and running, I want to use Solr to index the crawl data. Nutch successfully crawls the domain I specified, but fails when I run the command to push that data to Solr. This is the command:

bin/nutch solrindex http://solr:8181/solr/ crawl/crawldb crawl/linkdb crawl/segments/*

And this is the output:

Indexer: starting at 2013-09-12 10:34:43
Indexer: deleting gone documents: false
Indexer: URL filtering: false
Indexer: URL normalizing: false
Active IndexWriters :
SOLRIndexWriter
solr.server.url : URL of the SOLR instance (mandatory)
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : use authentication (default false)
solr.auth : username for authentication
solr.auth.password : password for authentication


Indexer: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/usr/share/apache-nutch-1.7/crawl/linkdb/crawl_fetch
Input path does not exist: file:/usr/share/apache-nutch-1.7/crawl/linkdb/crawl_parse
Input path does not exist: file:/usr/share/apache-nutch-1.7/crawl/linkdb/parse_data
Input path does not exist: file:/usr/share/apache-nutch-1.7/crawl/linkdb/parse_text
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
at org.apache.hadoop.mapred.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:40)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1081)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1073)
at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:123)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:185)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:195)
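All of the missing paths in that trace (crawl_fetch, crawl_parse, parse_data, parse_text) are directories that normally live inside a segment, so it looks as if crawl/linkdb is being picked up as a segment rather than as the link database. For reference, a fetched and parsed segment in my crawl directory looks roughly like this (the timestamped directory name is just an example):

ls crawl/segments/20130912103000/
content  crawl_fetch  crawl_generate  crawl_parse  parse_data  parse_text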

After some googling I also tried another command:

bin/nutch solrindex http://solr:8181/solr/ crawl/crawldb -linkdb crawl/linkdb crawl/segments/*

With this output:

Indexer: starting at 2013-09-12 10:45:51
Indexer: deleting gone documents: false
Indexer: URL filtering: false
Indexer: URL normalizing: false
Active IndexWriters :
SOLRIndexWriter
solr.server.url : URL of the SOLR instance (mandatory)
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : use authentication (default false)
solr.auth : username for authentication
solr.auth.password : password for authentication


Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:123)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:185)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:195)

Does anyone have any ideas on how to get past these errors?

2 Answers:

Answer 0 (score: 1)

I ran into the same error on a fresh Solr 5.2.1 and Nutch 1.10:

2015-07-30 20:56:23,015 WARN mapred.LocalJobRunner - job_local_0001 org.apache.solr.common.SolrException:Not Found

Not Found

request: http://127.0.0.1:8983/solr/update?wt=javabin&version=2

So I created a collection (or core, I'm not a Solr expert):

bin/solr create -c demo

and changed the URL in the Nutch indexing script:

bin/nutch solrindex http://127.0.0.1:8983/solr/demo crawl/crawldb -linkdb crawl/linkdb crawl/segments/*
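Before re-running the indexer it is worth checking that the new core actually responds; a quick query like the one below (URL and core name taken from my example above) should return a result count instead of the Not Found error:

curl "http://127.0.0.1:8983/solr/demo/select?q=*:*&rows=0"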

I know this question is quite old, but maybe it will help someone...

Answer 1 (score: 0)

Have you looked at the Solr log that shows the cause of the error? I ran into the same problem with Nutch; Solr's log showed the message "unknown field 'host'". After modifying Solr's schema.xml, the problem went away.
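In case it helps, one common way to fix "unknown field" errors like this is to replace the core's schema with the schema.xml that ships with Nutch and then reload the core. This is only a sketch: the Nutch path matches the install from the question, while the Solr path, port, and core name are placeholders you would need to adjust:

cp /usr/share/apache-nutch-1.7/conf/schema.xml /path/to/solr/collection1/conf/schema.xml
curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=collection1"

Alternatively, you can add just the missing field definitions (such as 'host') to your existing schema.xml by hand.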