I'm trying to crawl multiple websites with Nutch. My seed.txt looks like this:
http://1.a.b/
http://2.a.b/
and my regex-urlfilter.txt looks like this:
# skip file: ftp: and mailto: urls
-^(file|ftp|mailto):
# skip image and other suffixes we can't yet parse
# for a more extensive coverage use the urlfilter-suffix plugin
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|CSS|sit|SIT|eps|EPS|wmf|WMF|zip|ZIP|ppt|PPT|mpg|MPG|xls|XLS|gz|GZ|rpm|RPM|tgz|TGZ|mov|MOV|exe|EXE|jpeg|JPEG|bmp|BMP|js|JS)$
# skip URLs containing certain characters as probable queries, etc.
-[?*!@=]
# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
-.*(/[^/]+)/[^/]+\1/[^/]+\1/
# accept anything else
#+.
+^http://1.a.b/*
+^http://2.a.b/*
For the last part I also tried the following:
+^http://([a-z0-9]*\.)*a.b/*
The only site that gets crawled is the first one. All other configuration is left at its defaults.
I run the following command:
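As a side note, the accept rules themselves do match both seed URLs, which suggests the filter is not what blocks the second site. A quick sanity check with Python's re module (Java regexes behave the same for these simple patterns; note that the trailing /* means "zero or more slashes", not "a slash followed by anything", and the unescaped dots match any character):

```python
import re

# The two accept rules copied verbatim from the regex-urlfilter.txt above.
# Two subtleties: the dots are unescaped (so "1.a.b" would also match a
# host like "1xaxb"), and the trailing "/*" means "zero or more slashes".
rules = [r"^http://1.a.b/*", r"^http://2.a.b/*"]

seeds = ["http://1.a.b/", "http://2.a.b/"]
for seed in seeds:
    accepted = any(re.search(rule, seed) for rule in rules)
    print(seed, "accepted" if accepted else "rejected")
```

Both seeds print "accepted", so the URL filter passes them and the problem likely lies elsewhere in the crawl configuration.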
bin/nutch crawl urls -solr http://localhost:8984/solr/ -dir crawl -depth 10 -topN 10
Any ideas?!
Thanks!
Answer 0 (score: 1)
Try this in regex-urlfilter.txt:
Old setting:
# accept anything else
#+.
+^http://1.a.b/*
+^http://2.a.b/*
New setting:
# accept anything else
+.
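If accepting every URL with +. is too broad for your crawl, a single properly escaped rule can cover both hosts instead. This is a sketch of an alternative, not part of the original answer: escaping the dots keeps a.b from matching unrelated hosts, and the misleading /* suffix is dropped. A quick check in Python's re module (the pattern works the same in Java regex):

```python
import re

# Hypothetical replacement rule: any subdomain of the literal host "a.b".
# Escaped dots prevent "a.b" from matching arbitrary characters.
rule = re.compile(r"^http://([a-z0-9]+\.)*a\.b/")

for url in ["http://1.a.b/", "http://2.a.b/", "http://evil.com/a.b/x"]:
    print(url, "accepted" if rule.search(url) else "rejected")
```

The first two URLs are accepted and the third is rejected, because the ^ anchor ties the escaped host pattern to the start of the URL.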