Nutch fetch timeout

Time: 2017-02-27 18:32:14

Tags: web-crawler nutch

I am trying to crawl some websites with nutch-1.12, but the fetch fails for some of the sites in my seed list:

http://www.nature.com/ (1)
https://www.theguardian.com/international (2)
http://www.geomar.de (3)

As you can see in the log below, (2) and (3) fetch fine, while (1) results in a timeout, even though the link itself works fine in a browser. Since I don't want to drastically increase the wait time and number of retries, I would like to know if there is another way to find out why the timeout occurs and how to fix it.
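For reference, a single URL can also be fetched and parsed in isolation with Nutch's parser checker (assuming the tool behaves the same way in 1.12), which keeps the test limited to just this one host:

bin/nutch parsechecker http://www.nature.com/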

Log:

Injector: starting at 2017-02-27 18:33:38
Injector: crawlDb: nature_crawl/crawldb
Injector: urlDir: urls-2
Injector: Converting injected urls to crawl db entries.
Injector: overwrite: false
Injector: update: false
Injector: Total urls rejected by filters: 0
Injector: Total urls injected after normalization and filtering: 3
Injector: Total urls injected but already in CrawlDb: 0
Injector: Total new urls injected: 3
Injector: finished at 2017-02-27 18:33:42, elapsed: 00:00:03
Generator: starting at 2017-02-27 18:33:45
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: running in local mode, generating exactly one partition.
Generator: Partitioning selected urls for politeness.
Generator: segment: nature_crawl/segments/20170227183349
Generator: finished at 2017-02-27 18:33:51, elapsed: 00:00:05
Fetcher: starting at 2017-02-27 18:33:53
Fetcher: segment: nature_crawl/segments/20170227183349
Fetcher: threads: 3
Fetcher: time-out divisor: 2
QueueFeeder finished: total 3 records + hit by time limit :0
Using queue mode : byHost
Using queue mode : byHost
fetching https://www.theguardian.com/international (queue crawl delay=1000ms)
Using queue mode : byHost
fetching http://www.nature.com/ (queue crawl delay=1000ms)
Fetcher: throughput threshold: -1
Fetcher: throughput threshold retries: 5
fetching http://www.geomar.de/ (queue crawl delay=1000ms)
robots.txt whitelist not configured.
robots.txt whitelist not configured.
robots.txt whitelist not configured.
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=2
-activeThreads=2, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=2
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=1
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
.
.
.
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
fetch of http://www.nature.com/ failed with: java.net.SocketTimeoutException: Read timed out
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=0
-activeThreads=0, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=0
-activeThreads=0
Fetcher: finished at 2017-02-27 18:34:18, elapsed: 00:00:24
ParseSegment: starting at 2017-02-27 18:34:21
ParseSegment: segment: nature_crawl/segments/20170227183349
Parsed (507ms):http://www.geomar.de/
Parsed (344ms):https://www.theguardian.com/international
ParseSegment: finished at 2017-02-27 18:34:24, elapsed: 00:00:03
CrawlDb update: starting at 2017-02-27 18:34:26
CrawlDb update: db: nature_crawl/crawldb
CrawlDb update: segments: [nature_crawl/segments/20170227183349]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: false
CrawlDb update: URL filtering: false
CrawlDb update: 404 purging: false
CrawlDb update: Merging segment data into db.
CrawlDb update: finished at 2017-02-27 18:34:30, elapsed: 00:00:03

2 Answers:

Answer 0 (score: 1):

You can try increasing the HTTP timeout setting in nutch-site.xml:

<property>
  <name>http.timeout</name>
  <value>30000</value>
  <description>The default network timeout, in milliseconds.</description>
</property>

Otherwise, check whether the site's robots.txt allows crawling that page at all.
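A quick way to inspect it (assuming curl is available; any HTTP client will do) is:

curl http://www.nature.com/robots.txt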

Answer 1 (score: 0):

Not sure why, but it looks like www.nature.com leaves the connection hanging if the user agent string contains "Nutch". This can also be reproduced with wget:

wget -U 'my-test-crawler/Nutch-1.13-SNAPSHOT (mydotmailatexampledotcom)' -d http://www.nature.com/
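If that really is the cause, one possible workaround (only a sketch; http.agent.name and http.agent.version are the standard Nutch agent properties, and the values shown here are placeholders) is to configure an agent string in nutch-site.xml that does not contain "Nutch":

<property>
  <name>http.agent.name</name>
  <value>my-test-crawler</value>
  <description>HTTP 'User-Agent' request header name.</description>
</property>
<property>
  <name>http.agent.version</name>
  <value>1.0</value>
  <description>Overrides the default version token, which contains "Nutch".</description>
</property>

Whether www.nature.com then responds normally would still need to be verified, e.g. with the wget test above.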