Nutch fetch command does not fetch any data

Date: 2016-01-08 13:14:05

Tags: hadoop hbase nutch

My cluster setup consists of the following software stack:

Nutch branch-2.3.1, gora-hbase 0.6.1, Hadoop 2.5.2, HBase 0.98.8-hadoop2

So the initial commands are: inject, generate, fetch, parse, updatedb. The first two, inject and generate, work fine, but the fetch command (even though it executes successfully) does not fetch any data, and since the fetch step produces nothing, its subsequent steps fail as well.
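
For reference, a minimal sketch of that crawl cycle as run with Nutch 2.x (the seed directory urls/, the crawlId 1 and the topN value below are placeholders, not the actual settings of this setup):

    bin/nutch inject urls/ -crawlId 1                 # seed the webpage table with URLs
    bin/nutch generate -topN 50000 -crawlId 1         # mark a batch of URLs to fetch, prints a batch id
    bin/nutch fetch <batchId> -crawlId 1 -threads 50  # fetch that batch
    bin/nutch parse <batchId> -crawlId 1              # parse the fetched content
    bin/nutch updatedb <batchId> -crawlId 1           # write parse results and outlinks back to the db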

Please find the counter logs for each step below:

Inject job:

2016-01-08 14:12:45,649 INFO  [main] mapreduce.Job: Counters: 31
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=114853
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=836443
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=2
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=0
    Job Counters 
        Launched map tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=179217
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=59739
        Total vcore-seconds taken by all map tasks=59739
        Total megabyte-seconds taken by all map tasks=183518208
    Map-Reduce Framework
        Map input records=29973
        Map output records=29973
        Input split bytes=94
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=318
        CPU time spent (ms)=24980
        Physical memory (bytes) snapshot=427704320
        Virtual memory (bytes) snapshot=5077356544
        Total committed heap usage (bytes)=328728576
    injector
        urls_injected=29973
    File Input Format Counters 
        Bytes Read=836349
    File Output Format Counters 
        Bytes Written=0

Generate job:

2016-01-08 14:14:38,257 INFO  [main] mapreduce.Job: Counters: 50
    File System Counters
        FILE: Number of bytes read=137140
        FILE: Number of bytes written=623942
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=937
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=1
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=0
    Job Counters 
        Launched map tasks=1
        Launched reduce tasks=2
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=43788
        Total time spent by all reduces in occupied slots (ms)=305690
        Total time spent by all map tasks (ms)=14596
        Total time spent by all reduce tasks (ms)=61138
        Total vcore-seconds taken by all map tasks=14596
        Total vcore-seconds taken by all reduce tasks=61138
        Total megabyte-seconds taken by all map tasks=44838912
        Total megabyte-seconds taken by all reduce tasks=313026560
    Map-Reduce Framework
        Map input records=14345
        Map output records=14342
        Map output bytes=1261921
        Map output materialized bytes=137124
        Input split bytes=937
        Combine input records=0
        Combine output records=0
        Reduce input groups=14342
        Reduce shuffle bytes=137124
        Reduce input records=14342
        Reduce output records=14342
        Spilled Records=28684
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=1299
        CPU time spent (ms)=39600
        Physical memory (bytes) snapshot=2060779520
        Virtual memory (bytes) snapshot=15215738880
        Total committed heap usage (bytes)=1864892416
    Generator
        GENERATE_MARK=14342
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=0
    File Output Format Counters 
        Bytes Written=0
2016-01-08 14:14:38,429 INFO  [main] crawl.GeneratorJob: GeneratorJob: finished at 2016-01-08 14:14:38, time elapsed: 00:01:47
2016-01-08 14:14:38,431 INFO  [main] crawl.GeneratorJob: GeneratorJob: generated batch id: 1452242570-1295749106 containing 14342 URLs

Fetch job:

../nutch fetch -D mapred.reduce.tasks=2 -D mapred.child.java.opts=-Xmx1000m -D mapred.reduce.tasks.speculative.execution=false -D mapred.map.tasks.speculative.execution=false -D mapred.compress.map.output=true -D fetcher.timelimit.mins=180 1452242566-14060 -crawlId 1 -threads 50


2016-01-08 14:14:43,142 INFO  [main] fetcher.FetcherJob: FetcherJob: starting at 2016-01-08 14:14:43
2016-01-08 14:14:43,145 INFO  [main] fetcher.FetcherJob: FetcherJob: batchId: 1452242566-14060
2016-01-08 14:15:53,837 INFO  [main] mapreduce.Job: Job job_1452239500353_0024 completed successfully
2016-01-08 14:15:54,286 INFO  [main] mapreduce.Job: Counters: 50
    File System Counters
        FILE: Number of bytes read=44
        FILE: Number of bytes written=349279
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1087
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=1
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=0
    Job Counters 
        Launched map tasks=1
        Launched reduce tasks=2
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=30528
        Total time spent by all reduces in occupied slots (ms)=136535
        Total time spent by all map tasks (ms)=10176
        Total time spent by all reduce tasks (ms)=27307
        Total vcore-seconds taken by all map tasks=10176
        Total vcore-seconds taken by all reduce tasks=27307
        Total megabyte-seconds taken by all map tasks=31260672
        Total megabyte-seconds taken by all reduce tasks=139811840
    Map-Reduce Framework
        Map input records=0
        Map output records=0
        Map output bytes=0
        Map output materialized bytes=28
        Input split bytes=1087
        Combine input records=0
        Combine output records=0
        Reduce input groups=0
        Reduce shuffle bytes=28
        Reduce input records=0
        Reduce output records=0
        Spilled Records=0
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=426
        CPU time spent (ms)=11140
        Physical memory (bytes) snapshot=1884893184
        Virtual memory (bytes) snapshot=15245959168
        Total committed heap usage (bytes)=1751646208
    FetcherStatus
        HitByTimeLimit-QueueFeeder=0
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=0
    File Output Format Counters 
        Bytes Written=0
2016-01-08 14:15:54,314 INFO  [main] fetcher.FetcherJob: FetcherJob: finished at 2016-01-08 14:15:54, time elapsed: 00:01:11

Please advise.

2 Answers:

Answer 0 (score: 1):

It has been a while since I used Nutch, but from memory there is a time-to-live on fetched pages. For example, if you crawl http://helloworld.com today and issue the fetch command again today, it may complete without crawling anything, because the time-to-live on the URL http://helloworld.com is set to some number of days (I forget the default time-to-live).

I think you can work around this by clearing the crawl db and trying again - or there may now be a command to set the time-to-live to 0.
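
A hedged sketch of the "clear and retry" route for a gora-hbase setup like the one above (it assumes the crawl data for crawlId 1 lives in an HBase table named 1_webpage, i.e. <crawlId>_webpage - verify the table name with list in the hbase shell first):

    # Sketch only: this wipes all crawl state for crawlId 1, so the seed URLs
    # have to be re-injected afterwards.
    echo "truncate '1_webpage'" | hbase shell

Alternatively, the re-fetch interval itself can be lowered via the db.fetch.interval.default property in nutch-site.xml (30 days by default), which is probably the "time-to-live" meant here.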

Answer 1 (score: 0):

Finally, after a few hours, I found that this problem is caused by a bug in Nutch, namely that "the batch id passed to GeneratorJob via the option/parameter -batchId <id> is ignored, and the current batch is marked with a generated batch id." It is reported as an issue here: https://issues.apache.org/jira/browse/NUTCH-2143
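
One way around it, sketched here under the assumption that this Nutch version's FetcherJob accepts either a batch id or -all, is to feed fetch the batch id that GeneratorJob actually prints instead of the one passed on the command line; the grep/sed is just one way to capture it from the log line shown in the question:

    # Capture the batch id that GeneratorJob reports (log format as shown above)
    BATCH_ID=$(bin/nutch generate -topN 50000 -crawlId 1 2>&1 \
        | tee generate.log \
        | grep 'generated batch id:' \
        | sed 's/.*generated batch id: \([^ ]*\).*/\1/')

    # Fetch exactly that batch ...
    bin/nutch fetch "$BATCH_ID" -crawlId 1 -threads 50

    # ... or fetch everything that has been generated but not yet fetched
    bin/nutch fetch -all -crawlId 1 -threads 50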

Special thanks to andrew-butkus :)