I am using StormCrawler to crawl 40k sites, with max_depth = 2, and I want to crawl as fast as possible. I have 5 Storm nodes (with different static IPs) and 3 Elastic nodes. For now my best topology is:
spouts:
  - id: "spout"
    className: "com.digitalpebble.stormcrawler.elasticsearch.persistence.CollapsingSpout"
    parallelism: 10

bolts:
  - id: "partitioner"
    className: "com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt"
    parallelism: 1
  - id: "fetcher"
    className: "com.digitalpebble.stormcrawler.bolt.FetcherBolt"
    parallelism: 5
  - id: "sitemap"
    className: "com.digitalpebble.stormcrawler.bolt.SiteMapParserBolt"
    parallelism: 5
  - id: "parse"
    className: "com.digitalpebble.stormcrawler.bolt.JSoupParserBolt"
    parallelism: 100
  - id: "index"
    className: "com.digitalpebble.stormcrawler.elasticsearch.bolt.IndexerBolt"
    parallelism: 25
  - id: "status"
    className: "com.digitalpebble.stormcrawler.elasticsearch.persistence.StatusUpdaterBolt"
    parallelism: 25
  - id: "status_metrics"
    className: "com.digitalpebble.stormcrawler.elasticsearch.metrics.StatusMetricsBolt"
    parallelism: 5
and the crawler config:
config:
  topology.workers: 5
  topology.message.timeout.secs: 300
  topology.max.spout.pending: 250
  topology.debug: false
  fetcher.threads.number: 500
  worker.heap.memory.mb: 4096
Questions: 1) Should I use AggregationSpout or CollapsingSpout, and what is the difference? I tried AggregationSpout, but the performance was equal to that of 1 machine with the default configuration.
2) Is this parallelism configuration correct?
3) When I moved from the 1-node to the 5-node setup, I noticed that "FETCH ERROR" increased by about 20% and many sites were not fetched properly. What could be the reason?
UPDATE
ES-conf.yaml:
# configuration for Elasticsearch resources
config:
  # ES indexer bolt
  # addresses can be specified as a full URL
  # if not we assume that the protocol is http and the port 9200
  es.indexer.addresses: "1.1.1.1"
  es.indexer.index.name: "index"
  es.indexer.doc.type: "doc"
  es.indexer.create: false
  es.indexer.settings:
    cluster.name: "webcrawler-cluster"

  # ES metricsConsumer
  es.metrics.addresses: "http://1.1.1.1:9200"
  es.metrics.index.name: "metrics"
  es.metrics.doc.type: "datapoint"
  es.metrics.settings:
    cluster.name: "webcrawler-cluster"

  # ES spout and persistence bolt
  es.status.addresses: "http://1.1.1.1:9200"
  es.status.index.name: "status"
  es.status.doc.type: "status"
  #es.status.user: "USERNAME"
  #es.status.password: "PASSWORD"
  # the routing is done on the value of 'partition.url.mode'
  es.status.routing: true
  # stores the value used for the routing as a separate field
  # needed by the spout implementations
  es.status.routing.fieldname: "metadata.hostname"
  es.status.bulkActions: 500
  es.status.flushInterval: "5s"
  es.status.concurrentRequests: 1
  es.status.settings:
    cluster.name: "webcrawler-cluster"

  ################
  # spout config #
  ################

  # positive or negative filter parsable by the Lucene Query Parser
  # es.status.filterQuery: "-(metadata.hostname:stormcrawler.net)"

  # time in secs for which the URLs will be considered for fetching after an ack or a fail
  es.status.ttl.purgatory: 30

  # Min time (in msecs) to allow between 2 successive queries to ES
  es.status.min.delay.queries: 2000

  es.status.max.buckets: 50
  es.status.max.urls.per.bucket: 2
  # field to group the URLs into buckets
  es.status.bucket.field: "metadata.hostname"
  # field to sort the URLs within a bucket
  es.status.bucket.sort.field: "nextFetchDate"
  # field to sort the buckets
  es.status.global.sort.field: "nextFetchDate"

  # Delay since previous query date (in secs) after which the nextFetchDate value will be reset
  es.status.reset.fetchdate.after: -1

  # CollapsingSpout : limits the deep paging by resetting the start offset for the ES query
  es.status.max.start.offset: 500

  # AggregationSpout : sampling improves the performance on large crawls
  es.status.sample: false

  # AggregationSpout (expert): adds this value in mins to the latest date returned in the results
  # and uses it as nextFetchDate
  es.status.recentDate.increase: -1
  es.status.recentDate.min.gap: -1

  topology.metrics.consumer.register:
    - class: "com.digitalpebble.stormcrawler.elasticsearch.metrics.MetricsConsumer"
      parallelism.hint: 1
      #whitelist:
      #  - "fetcher_counter"
      #  - "fetcher_average.bytes_fetched"
      #blacklist:
      #  - "__receive.*"
Answer (score: 1)
1) Should I use AggregationSpout or CollapsingSpout, and what is the difference? I tried AggregationSpout, but the performance was equal to that of 1 machine with the default configuration.
As their names suggest, AggregationSpout uses aggregations as the mechanism for grouping URLs per host (or domain, or IP, or whatever), whereas CollapsingSpout uses collapsing. The latter is likely to be slower if you configure more than 1 URL per bucket (es.status.max.urls.per.bucket), as it issues sub-queries for each bucket. AggregationSpout should have good performance, especially with es.status.sample set to true. CollapsingSpouts are experimental at this stage.
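For illustration, a minimal sketch of what that switch could look like, assuming the rest of the topology and es-conf.yaml from the question stay unchanged (the parallelism value of 10 is simply carried over, not a recommendation):

# in the topology definition: swap the spout class
spouts:
  - id: "spout"
    className: "com.digitalpebble.stormcrawler.elasticsearch.persistence.AggregationSpout"
    parallelism: 10

# in es-conf.yaml: enable sampling, which the answer suggests helps on large crawls
config:
  es.status.sample: true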
2) Is this parallelism configuration correct?
That is probably more JSoupParserBolts than needed. In practice, a ratio of 1:4 relative to the FetcherBolts is fine even with 500 fetching threads. The Storm UI is useful for spotting bottlenecks and seeing which components need scaling. Everything else looks OK, but in practice you should look at the Storm UI and the metrics to tune the topology to the best settings for your crawl.
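A hedged sketch of that adjustment, reusing the bolt ids and classes from the question; the exact parallelism value is something to tune against the Storm UI rather than a prescription:

bolts:
  - id: "fetcher"
    className: "com.digitalpebble.stormcrawler.bolt.FetcherBolt"
    parallelism: 5
  - id: "parse"
    className: "com.digitalpebble.stormcrawler.bolt.JSoupParserBolt"
    # far fewer parser instances than the original 100; start small
    # and scale up only if the Storm UI shows parsing as the bottleneck
    parallelism: 10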
3) I noticed that "FETCH ERROR" increased by about 20% and many sites were not fetched properly when I moved from the 1-node to the 5-node configuration. What could be the reason?
This could indicate that you are saturating your network connection, but that should not be the case when using more nodes. Maybe check with the Storm UI how the FetcherBolts are distributed across the nodes: is one worker running all the instances, or do they all get the same number? Also look at the logs to see what is happening, e.g. are there lots of exceptions caused by overload?
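If the distribution turns out uneven, the politeness-related settings below are also worth reviewing; they come from the standard StormCrawler crawler configuration, the values shown are only the usual defaults, and this is a supplementary sketch rather than part of the original answer. With per-host routing, all URLs of one host land in the same FetcherBolt queue, so these settings govern how hard each node hits the same sites:

config:
  # group URLs into fetch queues per host, matching the metadata.hostname routing above
  partition.url.mode: "byHost"
  # politeness: threads allowed per queue and delay (secs) between requests to the same host
  fetcher.threads.per.queue: 1
  fetcher.server.delay: 1.0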