How to bulk index in Elasticsearch using the Python API

Date: 2015-02-05 22:24:09

Tags: python mongodb elasticsearch

I'm trying to bulk insert a large number of documents into Elasticsearch using the Python API.

import elasticsearch
from elasticsearch.helpers import bulk
from pymongo import MongoClient

es = elasticsearch.Elasticsearch()

def index_collection(db, collection, fields, host='localhost', port=27017):
    conn = MongoClient(host, port)
    coll = conn[db][collection]
    cursor = coll.find({}, fields=fields, timeout=False)
    print "Starting Bulk index of {} documents".format(cursor.count())

    def action_gen():
        """
        Generator to use for bulk inserts
        """
        for n, doc in enumerate(cursor):

            # Action metadata; the Mongo ObjectId's hex string is converted
            # to an integer to use as the Elasticsearch document _id
            op_dict = {
                '_index': db.lower(),
                '_type': collection,
                '_id': int('0x' + str(doc['_id']), 16),
            }
            # Remove the ObjectId and ship the rest of the document as _source
            doc.pop('_id')
            op_dict['_source'] = doc
            yield op_dict

    res = bulk(es, action_gen(), stats_only=True)
    print res

The documents come from a MongoDB collection, and in the function above I'm bulk indexing them the way the documentation explains.

The bulk indexing keeps filling Elasticsearch with thousands of empty documents. Can anyone tell me what I'm doing wrong?

1 Answer:

Answer 0 (score: 2)

I've never seen bulk data put together that way, particularly what you're doing with "_source". There may be a way to make that work, I don't know offhand, but when I tried it I got strange results.

If you look at the bulk api, ES expects a metadata document followed by the document to be indexed, so you need two entries in your bulk data list for each document.
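For reference, the raw request body that the bulk endpoint ultimately receives is newline-delimited JSON, alternating one action-metadata line and one source line per document (the index, type, and field names below are just illustrative):

{ "index": { "_index": "mydb", "_type": "mycollection", "_id": 1 } }
{ "name": "doc one" }
{ "index": { "_index": "mydb", "_type": "mycollection", "_id": 2 } }
{ "name": "doc two" }

So maybe something like this: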

import elasticsearch
from pymongo import MongoClient

es = elasticsearch.Elasticsearch()

def index_collection(db, collection, fields, host='localhost', port=27017):
    conn = MongoClient(host, port)
    coll = conn[db][collection]
    cursor = coll.find({}, fields=fields, timeout=False)
    print "Starting Bulk index of {} documents".format(cursor.count())

    bulk_data = []

    for n, doc in enumerate(cursor):

        # Metadata entry, wrapped in the "index" action as the bulk API expects
        bulk_data.append({
            'index': {
                '_index': db.lower(),
                '_type': collection,
                '_id': int('0x' + str(doc['_id']), 16),
            }
        })
        # Source entry; drop the ObjectId, which isn't JSON serializable
        doc.pop('_id')
        bulk_data.append(doc)

    es.bulk(index=db.lower(), body=bulk_data, refresh=True)

However, I didn't try running that code. Here's a script I know works that you can play with, if it helps:

from elasticsearch import Elasticsearch

es_client = Elasticsearch(hosts=[{"host": "localhost", "port": 9200}])

index_name = "test_index"

if es_client.indices.exists(index_name):
    print("deleting '%s' index..." % (index_name))
    print(es_client.indices.delete(index=index_name, ignore=[400, 404]))

print("creating '%s' index..." % (index_name))
print(es_client.indices.create(index=index_name))

bulk_data = []

for i in range(4):
    bulk_data.append({
        "index": {
            "_index": index_name,
            "_type": "doc",
            "_id": i
        }
    })
    bulk_data.append({"idx": i})

print("bulk indexing...")
res = es_client.bulk(index=index_name, body=bulk_data, refresh=True)
print(res)

print("results:")
for doc in es_client.search(index=index_name)['hits']['hits']:
    print(doc)
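
If you want to check for per-document failures, the bulk response is a dict with an "errors" flag and an "items" list, one entry per action, so something along these lines should work:

if res.get("errors"):
    # Each item mirrors the action that produced it, keyed by the action name
    for item in res["items"]:
        info = item.get("index", {})
        if info.get("status", 200) >= 300:
            print("failed: %s" % info)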