I am trying to index 16 million docs (47 GB) from a MySQL table into an Elasticsearch index using jprante's elasticsearch-river-jdbc plugin. However, after creating the river and waiting for about 15 minutes, the entire heap is consumed with no sign of the river running or of any documents being indexed. The river used to work fine when I had around 10 to 12 million records to index. I have tried running the river 3 or 4 times, but in vain.
Heap memory preallocated to the ES process = 10g
elasticsearch.yml
cluster.name: test_cluster
index.cache.field.type: soft
index.cache.field.max_size: 50000
index.cache.field.expire: 2h
cloud.aws.access_key: BBNYJC25Dij8JO7YM23I(fake)
cloud.aws.secret_key: GqE6y009ZnkO/+D1KKzd6M5Mrl9/tIN2zc/acEzY(fake)
cloud.aws.region: us-west-1
discovery.type: ec2
discovery.ec2.groups: sg-s3s3c2fc(fake)
discovery.ec2.any_group: false
discovery.zen.ping.timeout: 3m
gateway.recover_after_nodes: 1
gateway.recover_after_time: 1m
bootstrap.mlockall: true
network.host: 10.111.222.33(fake)
river.sh
curl -XPUT 'http://--address--:9200/_river/myriver/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "driver" : "com.mysql.jdbc.Driver",
    "url" : "jdbc:mysql://--address--:3306/mydatabase",
    "user" : "USER",
    "password" : "PASSWORD",
    "sql" : "select * from mytable order by creation_time desc",
    "poll" : "5d",
    "versioning" : false
  },
  "index" : {
    "index" : "myindex",
    "type" : "mytype",
    "bulk_size" : 500,
    "bulk_timeout" : "240s"
  }
}'
System properties:
16 GB RAM
200 GB disk space
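One plausible contributor to the heap exhaustion described above is the river's SQL: "select * from mytable order by creation_time desc" asks the driver for all 16 million rows in one result set, which can be buffered in memory before any bulk indexing starts. A common workaround is keyset pagination, fetching bounded pages keyed on an indexed column. The sketch below only simulates that loop in plain Python; the names fetch_page and index_all are illustrative and not part of the river plugin.

```python
# Illustrative sketch of keyset pagination (not part of elasticsearch-river-jdbc).
# Instead of one giant "select *", walk the table in pages so that at most
# page_size rows are held in memory at a time.

def fetch_page(rows, last_id, page_size):
    """Stand-in for a 'WHERE id > ? ORDER BY id LIMIT ?' query:
    return the next page of rows whose id is greater than last_id."""
    return [r for r in rows if r["id"] > last_id][:page_size]

def index_all(rows, page_size=500):
    """Walk the table page by page; each page stands in for one bulk request."""
    batches = []
    last_id = 0
    while True:
        page = fetch_page(rows, last_id, page_size)
        if not page:
            break
        batches.append([r["id"] for r in page])  # stand-in for a bulk index call
        last_id = page[-1]["id"]                 # resume after the last seen key
    return batches

rows = [{"id": i} for i in range(1, 8)]
print(index_all(rows, page_size=3))  # → [[1, 2, 3], [4, 5, 6], [7]]
```

The same idea applies to the real river: several pages of a few hundred rows keep memory use proportional to the page size rather than to the table size.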
Answer 0 (score: 0)
Depending on your elasticsearch-river-jdbc version (find out with ls -lrt plugins/river-jdbc/), this bug may already have been fixed (https://github.com/jprante/elasticsearch-river-jdbc/issues/45).
Otherwise, file a bug report on GitHub.
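The version check the answer suggests can be reduced to a numeric comparison between the installed jar's version and the release assumed to contain the fix. In the sketch below, the fixed_in="1.4.0" threshold is purely illustrative; check the linked issue thread for the actual release that closed the bug.

```python
# Hypothetical helper for the version check suggested above. The "fixed"
# release number is an assumption, not taken from the issue tracker.

def version_tuple(v):
    """Turn a dotted version string like '1.4.0' into (1, 4, 0)."""
    return tuple(int(part) for part in v.split("."))

def has_fix(installed, fixed_in="1.4.0"):
    """True if the installed version is at least the (assumed) fixed release."""
    return version_tuple(installed) >= version_tuple(fixed_in)

print(has_fix("1.3.2"))  # → False
print(has_fix("1.4.1"))  # → True
```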