I am using Nutch 1.2. When I run the crawl command like this:
bin/nutch crawl urls -dir crawl -depth 2 -topN 1000
Injector: starting at 2011-07-11 12:18:37
Injector: crawlDb: crawl/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Injector: Merging injected urls into crawl db.
Injector: finished at 2011-07-11 12:18:44, elapsed: 00:00:07
Generator: starting at 2011-07-11 12:18:45
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: topN: 1000
Generator: jobtracker is 'local', generating exactly one partition.
Generator: 0 records selected for fetching, exiting ...
Stopping at depth=0 - no more URLs to fetch.
**No URLs to fetch - check your seed list and URL filters.**
crawl finished: crawl
The problem is that it keeps complaining: No URLs to fetch - check your seed list and URL filters.
I have a list of URLs to crawl in the file nutch_root/urls/nutch (a sketch of what that file should look like is below). My crawl-urlfilter.txt is also set up.
Why does it complain about my URL list and filters? It has never done this before.
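(For reference, the Nutch seed list is just a plain text file with one full URL per line. A minimal sketch of what nutch_root/urls/nutch might contain, assuming the two hosts from the filter rules below are the intended seeds:

http://152.111.1.87/
http://152.111.1.88/

Each line needs the complete URL including the http:// scheme, otherwise the injector will skip it.)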
Here is my crawl-urlfilter.txt:
# skip file:, ftp:, & mailto: urls
-^(file|ftp|mailto):
# skip image and other suffixes we can't yet parse
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|sit|eps|wmf|zip|ppt|mpg|xls|gz|rpm|tgz|mov|MOV|exe|jpeg|JPEG|bmp|BMP)$
# skip URLs containing certain characters as probable queries, etc.
# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
-.*(/[^/]+)/[^/]+\1/[^/]+\1/
# accept hosts in MY.DOMAIN.NAME
+^http://([a-z0-9]*\.)*152.111.1.87/
+^http://([a-z0-9]*\.)*152.111.1.88/
# skip everything else
-.
Answer 0 (score: 0)
Your URL filter rules look odd; I don't think they match your valid URLs. Wouldn't something like this work better?
+^http://152\.111\.1\.87/
+^http://152\.111\.1\.88/