StormCrawler archetype topology does not fetch outlinks

Date: 2018-04-05 12:15:53

Tags: web-crawler apache-storm stormcrawler

As far as I understand it, the basic example should be able to crawl and fetch pages.

I followed the example at http://stormcrawler.net/getting-started/, but the crawler seems to fetch only a few pages and then does nothing further.

I want to crawl http://books.toscrape.com/. When I run the crawl, I can see in the logs that only the first page is fetched; a number of other URLs are discovered but never fetched:

8010 [Thread-34-parse-executor[5 5]] INFO  c.d.s.b.JSoupParserBolt - Parsing : starting http://books.toscrape.com/
8214 [Thread-34-parse-executor[5 5]] INFO  c.d.s.b.JSoupParserBolt - Parsed http://books.toscrape.com/ in 182 msec
content 1435 chars
url     http://books.toscrape.com/
domain  toscrape.com
description
title   All products | Books to Scrape - Sandbox
http://books.toscrape.com/catalogue/category/books/new-adult_20/index.html      DISCOVERED      Thu Apr 05 13:46:01 CEST 2018
        url.path: http://books.toscrape.com/
        depth: 1

http://books.toscrape.com/catalogue/the-dirty-little-secrets-of-getting-your-dream-job_994/index.html   DISCOVERED      Thu Apr 05 13:46:01 CEST 2018
        url.path: http://books.toscrape.com/
        depth: 1

http://books.toscrape.com/catalogue/category/books/thriller_37/index.html       DISCOVERED      Thu Apr 05 13:46:01 CEST 2018
        url.path: http://books.toscrape.com/
        depth: 1

http://books.toscrape.com/catalogue/category/books/academic_40/index.html       DISCOVERED      Thu Apr 05 13:46:01 CEST 2018
        url.path: http://books.toscrape.com/
        depth: 1

http://books.toscrape.com/catalogue/category/books/classics_6/index.html        DISCOVERED      Thu Apr 05 13:46:01 CEST 2018
        url.path: http://books.toscrape.com/
        depth: 1

http://books.toscrape.com/catalogue/category/books/paranormal_24/index.html     DISCOVERED      Thu Apr 05 13:46:01 CEST 2018
        url.path: http://books.toscrape.com/
        depth: 1

....

17131 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928770        172.18.25.22:1024      6:partitioner URLPartitioner           {}
17164 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928770        172.18.25.22:1024      8:spout       queue_size               0
17403 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928770        172.18.25.22:1024      5:parse       JSoupParserBolt          {tuple_success=1, outlink_kept=73}
17693 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928770        172.18.25.22:1024      3:fetcher     num_queues               0
17693 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928770        172.18.25.22:1024      3:fetcher     fetcher_average_perdoc   {time_in_queues=265.0, bytes_fetched=51294.0, fetch_time=52.0}
17693 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928770        172.18.25.22:1024      3:fetcher     fetcher_counter          {robots.fetched=1, bytes_fetched=51294, fetched=1}
17693 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928770        172.18.25.22:1024      3:fetcher     activethreads            0
17693 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928770        172.18.25.22:1024      3:fetcher     fetcher_average_persec   {bytes_fetched_perSec=5295.137813564571, fetched_perSec=0.10323113451016827}
17693 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928770        172.18.25.22:1024      3:fetcher     in_queues                0
27127 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928780        172.18.25.22:1024      6:partitioner URLPartitioner           {}
27168 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928780        172.18.25.22:1024      8:spout       queue_size               0
27405 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928780        172.18.25.22:1024      5:parse       JSoupParserBolt          {tuple_success=0, outlink_kept=0}
27695 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928780        172.18.25.22:1024      3:fetcher     num_queues               0
27695 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928780        172.18.25.22:1024      3:fetcher     fetcher_average_perdoc   {}
27695 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928780        172.18.25.22:1024      3:fetcher     fetcher_counter          {robots.fetched=0, bytes_fetched=0, fetched=0}
27695 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928780        172.18.25.22:1024      3:fetcher     activethreads            0
27696 [Thread-39] INFO  o.a.s.m.LoggingMetricsConsumer - 1522928780        172.18.25.22:1024      3:fetcher     fetcher_average_persec   {bytes_fetched_perSec=0.0, fetched_perSec=0.0}

I have not changed any configuration files, including crawler-conf.yaml. The flag parser.emitOutlinks should also be true, since that is the default value in crawler-default.yaml.
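
For reference, the relevant setting looks like this in crawler-default.yaml (an excerpt as I understand the 1.x defaults; check the copy bundled with your StormCrawler version):

# excerpt from crawler-default.yaml: outlinks found by the parser
# are emitted on the status stream when this is true (the default)
parser.emitOutlinks: true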

In another project I also followed the YouTube tutorial on Elasticsearch. There I ran into the same problem: no pages were fetched and indexed at all.

Where could the error be that prevents the crawler from fetching any pages?

1 Answer:

Answer 0 (score: 0)

The topology generated by the artefact is merely an example. It uses StdOutStatusUpdater, which simply dumps the discovered URLs to the console. If you run in local mode or with a single worker, you can use MemoryStatusUpdater instead, as it adds the discovered URLs back to the MemorySpout, and these will be processed in turn.
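
As a minimal sketch of what that change looks like in the CrawlTopology class generated by the archetype (wiring and package names as in StormCrawler 1.x; treat the details as assumptions and verify against your generated project):

import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

import com.digitalpebble.stormcrawler.ConfigurableTopology;
import com.digitalpebble.stormcrawler.Constants;
import com.digitalpebble.stormcrawler.bolt.FetcherBolt;
import com.digitalpebble.stormcrawler.bolt.JSoupParserBolt;
import com.digitalpebble.stormcrawler.bolt.SiteMapParserBolt;
import com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt;
import com.digitalpebble.stormcrawler.indexing.StdOutIndexer;
import com.digitalpebble.stormcrawler.persistence.MemoryStatusUpdater;
import com.digitalpebble.stormcrawler.spout.MemorySpout;

public class CrawlTopology extends ConfigurableTopology {

    public static void main(String[] args) throws Exception {
        ConfigurableTopology.start(new CrawlTopology(), args);
    }

    @Override
    protected int run(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();

        // seed URL(s) injected via the in-memory spout
        builder.setSpout("spout",
                new MemorySpout(new String[] { "http://books.toscrape.com/" }));

        builder.setBolt("partitioner", new URLPartitionerBolt())
                .shuffleGrouping("spout");

        builder.setBolt("fetch", new FetcherBolt())
                .fieldsGrouping("partitioner", new Fields("key"));

        builder.setBolt("sitemap", new SiteMapParserBolt())
                .localOrShuffleGrouping("fetch");

        builder.setBolt("parse", new JSoupParserBolt())
                .localOrShuffleGrouping("sitemap");

        builder.setBolt("index", new StdOutIndexer())
                .localOrShuffleGrouping("parse");

        // The generated example uses StdOutStatusUpdater here, which only
        // prints DISCOVERED URLs to the console. MemoryStatusUpdater feeds
        // them back to the MemorySpout instead, so they are fetched in turn
        // (in-memory only, hence local mode / single worker).
        builder.setBolt("status", new MemoryStatusUpdater())
                .localOrShuffleGrouping("fetch", Constants.StatusStreamName)
                .localOrShuffleGrouping("parse", Constants.StatusStreamName);

        return submit("crawl", conf, builder);
    }
}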

Note that this does not persist any information about the URLs when you kill the topology or it crashes. Again, this is meant only for debugging and for taking your first steps with StormCrawler.

If you want to persist the URLs, you can use any of the persistence backends (SOLR/Elasticsearch, SQL). Feel free to describe your ES problem in a separate question.