Nutch crawler does not retrieve news article content

Asked: 2016-08-04 07:28:40

Tags: web-crawler nutch

I am trying to crawl news articles from these links:

Article 1

Article 2

However, the page text is not making it into the content field of the index (Elasticsearch).

The result of the crawl is:

{
  "took": 2,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 2,
    "max_score": 0.09492774,
    "hits": [
      {
        "_index": "news",
        "_type": "doc",
        "_id": "http://www.bloomberg.com/press-releases/2016-07-08/network-1-announces-settlement-of-patent-litigation-with-apple-inc",
        "_score": 0.09492774,
        "_source": {
          "tstamp": "2016-08-04T07:21:59.614Z",
          "segment": "20160804125156",
          "digest": "d583a81c0c4c7510f5c842ea3b557992",
          "host": "www.bloomberg.com",
          "boost": "1.0",
          "id": "http://www.bloomberg.com/press-releases/2016-07-08/network-1-announces-settlement-of-patent-litigation-with-apple-inc",
          "url": "http://www.bloomberg.com/press-releases/2016-07-08/network-1-announces-settlement-of-patent-litigation-with-apple-inc",
          "content": ""
        }
      },
      {
        "_index": "news",
        "_type": "doc",
        "_id": "http://www.bloomberg.com/press-releases/2016-07-05/apple-donate-life-america-bring-national-organ-donor-registration-to-iphone",
        "_score": 0.009845509,
        "_source": {
          "tstamp": "2016-08-04T07:22:05.708Z",
          "segment": "20160804125156",
          "digest": "2a94a32ffffd0e03647928755e055e30",
          "host": "www.bloomberg.com",
          "boost": "1.0",
          "id": "http://www.bloomberg.com/press-releases/2016-07-05/apple-donate-life-america-bring-national-organ-donor-registration-to-iphone",
          "url": "http://www.bloomberg.com/press-releases/2016-07-05/apple-donate-life-america-bring-national-organ-donor-registration-to-iphone",
          "content": ""
        }
      }
    ]
  }
}

Notice that the content field is empty. I have tried different options in nutch-site.xml, but the result is still the same. Please help.

2 Answers:

Answer 0 (score: 3)

I don't know why Nutch fails to extract the article content, but I found a workaround using Jsoup. I developed a custom parse-filter plugin that parses the whole document with Jsoup and sets the parsed text in the ParseResult returned by the filter, and I use my custom parse filter in place of the parse-html plugin by changing parse-plugins.xml (a sketch of the full filter class and its registration follows the code below).

The core of it looks like this:

   // Re-parse the raw fetched bytes with Jsoup, then swap in its body text for this URL.
   Document document = Jsoup.parse(new String(content.getContent(), StandardCharsets.UTF_8),
       content.getUrl());
   Parse parse = parseResult.get(content.getUrl());
   ParseStatus status = parse.getData().getStatus();
   String title = document.title();
   // Reuse outlinks and metadata from the original parse; only the text is replaced.
   ParseData parseData = new ParseData(status, title, parse.getData().getOutlinks(),
       parse.getData().getContentMeta(), parse.getData().getParseMeta());
   parseResult.put(content.getUrl(), new ParseText(document.body().text()), parseData);
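
For context, here is a minimal sketch of how such a snippet could be wrapped into a plugin class, assuming Nutch 1.x's HtmlParseFilter extension point (which is what hands the filter an existing ParseResult, as the snippet does). The package, class name and null check are hypothetical, not the original author's code:

    package org.example.parse.jsoup;  // hypothetical package

    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.nutch.parse.HTMLMetaTags;
    import org.apache.nutch.parse.HtmlParseFilter;
    import org.apache.nutch.parse.Parse;
    import org.apache.nutch.parse.ParseData;
    import org.apache.nutch.parse.ParseResult;
    import org.apache.nutch.parse.ParseText;
    import org.apache.nutch.protocol.Content;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.w3c.dom.DocumentFragment;

    public class JsoupTextParseFilter implements HtmlParseFilter {

      private Configuration conf;

      @Override
      public ParseResult filter(Content content, ParseResult parseResult,
                                HTMLMetaTags metaTags, DocumentFragment doc) {
        Parse parse = parseResult.get(content.getUrl());
        if (parse == null) {
          return parseResult;  // nothing was parsed for this URL, leave the result alone
        }

        // Re-parse the raw fetched bytes with Jsoup to recover the full body text.
        Document document = Jsoup.parse(
            new String(content.getContent(), StandardCharsets.UTF_8), content.getUrl());

        // Keep status, outlinks and metadata from the original parse; only the text changes.
        ParseData parseData = new ParseData(parse.getData().getStatus(), document.title(),
            parse.getData().getOutlinks(),
            parse.getData().getContentMeta(),
            parse.getData().getParseMeta());

        parseResult.put(content.getUrl(),
            new ParseText(document.body().text()), parseData);
        return parseResult;
      }

      @Override
      public void setConf(Configuration conf) {
        this.conf = conf;
      }

      @Override
      public Configuration getConf() {
        return conf;
      }
    }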

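Whichever parser mapping ends up in parse-plugins.xml, a custom plugin is only picked up if its id also matches the plugin.includes property in nutch-site.xml. A hedged example, where parse-jsoup is a hypothetical id for the plugin above and the rest of the value is merely illustrative of a typical crawl setup:

    <property>
      <name>plugin.includes</name>
      <value>protocol-http|urlfilter-regex|parse-(html|tika|jsoup)|index-(basic|anchor)|indexer-elastic|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
    </property>
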
Answer 1 (score: 1)

A somewhat out-of-context answer, but try Apache ManifoldCF. It provides a built-in connector for Elasticsearch and keeps a better history log, which makes it easier to work out why data is not being indexed. The connectors section in ManifoldCF also lets you specify which field the content should be indexed into. It is a good open-source alternative worth trying.