elasticsearch python bulk api (elasticsearch-py)

Date: 2014-10-02 11:35:50

Tags: python elasticsearch pyelasticsearch elasticsearch-bulk-api

I am confused about py-elasticsearch bulk. @Diolor's solution at https://stackoverflow.com/questions/20288770/how-to-use-bulk-api-to-store-the-keywords-in-es-by-using-python works, but I would like to use the plain es.bulk().

My code:

from elasticsearch import Elasticsearch
es = Elasticsearch()
doc = '''\n {"host":"logsqa","path":"/logs","message":"test test","@timestamp":"2014-10-02T10:11:25.980256","tags":["multiline","mydate_0.005"]} \n'''
result = es.bulk(index="logstash-test", doc_type="test", body=doc)

The error is:

 No handlers could be found for logger "elasticsearch"
Traceback (most recent call last):
  File "./log-parser-perf.py", line 55, in <module>
    insertToES()
  File "./log-parser-perf.py", line 46, in insertToES
    res = es.bulk(index="logstash-test", doc_type="test", body=doc)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch-1.0.0-py2.7.egg/elasticsearch/client/utils.py", line 70, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch-1.0.0-py2.7.egg/elasticsearch/client/__init__.py", line 570, in bulk
    params=params, body=self._bulk_body(body))
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch-1.0.0-py2.7.egg/elasticsearch/transport.py", line 274, in perform_request
    status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch-1.0.0-py2.7.egg/elasticsearch/connection/http_urllib3.py", line 57, in perform_request
    self._raise_error(response.status, raw_data)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch-1.0.0-py2.7.egg/elasticsearch/connection/base.py", line 83, in _raise_error
    raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.TransportError: TransportError(500, u'ActionRequestValidationException[Validation Failed: 1: no requests added;]')

The generated URL of the POST call is:

    /logstash-test/test/_bulk

and the POST body is:

    {"host":"logsqa","path":"/logs","message":"test test","@timestamp":"2014-10-02T10:11:25.980256","tags":["multiline","mydate_0.005"]}

So I tried the curl by hand. This curl does not work:

> curl -XPUT http://localhost:9200/logstash-test/test2/_bulk -d
> '{"host":"logsqa","path":"/logs","message":"test
> test","@timestamp":"2014-10-02T10:11:25.980256","tags":["multiline","mydate_0.005"]}
> '
>
> {"error":"ActionRequestValidationException[Validation Failed: 1: no requests added;]","status":500}

So the error is partly understandable, but I did expect elasticsearch.bulk() to manage the input parameters correctly.

The Python function's docstring is:

bulk(*args, **kwargs)
    :arg body: The operation definition and data (action-data pairs), as
        either a newline separated string, or a sequence of dicts to
        serialize (one per row).
    :arg index: Default index for items which don't provide one
    :arg doc_type: Default document type for items which don't provide one
    :arg consistency: Explicit write consistency setting for the operation
    :arg refresh: Refresh the index after performing the operation
    :arg routing: Specific routing value
    :arg replication: Explicitly set the replication type (default: sync)
    :arg timeout: Explicit operation timeout
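Per the docstring, the body must be action/data pairs as a newline-separated string. A minimal sketch of building such a body for the document from the question; the missing piece in the original code is the `{"index":{}}` action line (the `es.bulk()` call itself is left commented out, since it assumes a node on localhost:9200):

```python
import json

# The source document from the question.
source = {
    "host": "logsqa",
    "path": "/logs",
    "message": "test test",
    "@timestamp": "2014-10-02T10:11:25.980256",
    "tags": ["multiline", "mydate_0.005"],
}

# Each document needs an action line before it; an empty index action
# falls back to the default index/doc_type passed to es.bulk().
body = json.dumps({"index": {}}) + "\n" + json.dumps(source) + "\n"

print(body)
# With a running node, this body would then be sent via:
# es.bulk(index="logstash-test", doc_type="test", body=body)
```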

2 answers:

Answer 0 (score: 4)

In case anyone is trying to use the bulk API and wondering what the format should be, this is what worked for me:

doc = [
    {
        'index':{
            '_index': index_name,
            '_id' : <some_id>,
            '_type':<doc_type>
        }
    },
    {
        'field_1': <value>,
        'field_2': <value>
    }
]

docs_as_string = json.dumps(doc[0]) + '\n' + json.dumps(doc[1]) + '\n'
client.bulk(body=docs_as_string)
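Per the `bulk()` docstring quoted in the question, the body may also be "a sequence of dicts to serialize", so the manual `json.dumps` step is optional. A sketch with hypothetical placeholder values (the `client.bulk()` call is commented out, since it assumes a reachable cluster):

```python
import json

# Hypothetical values standing in for the placeholders above.
index_name = "logstash-test"
doc = [
    {"index": {"_index": index_name, "_type": "test", "_id": "1"}},
    {"field_1": "value_1", "field_2": "value_2"},
]

# With a client, the list can be passed directly:
# client.bulk(body=doc)
# which is equivalent to serializing it to newline-delimited JSON:
docs_as_string = "".join(json.dumps(d) + "\n" for d in doc)
print(docs_as_string)
```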

Answer 1 (score: 1)

From @HonzaKral on GitHub:

https://github.com/elasticsearch/elasticsearch-py/issues/135

Hi sirkubax,

The bulk api (as all the others) follows the bulk API format of elasticsearch itself very closely, so the body has to be:

    doc = '''{"index":{}}\n{"host":"logsqa","path":"/logs","message":"test test","@timestamp":"2014-10-02T10:11:25.980256","tags":["multiline","mydate_0.005"]}\n'''

and it works. Or it can be a list of those two dicts.

This is a complicated and unwieldy format to work with from python, which is why I tried to create a more convenient way to deal with bulk in elasticsearch.helpers.bulk (0). It simply accepts an iterator of documents, will extract any optional metadata from them (like _id, _type etc.) and construct (and execute) the bulk request for you. For more information on the accepted formats see the docs for streaming_bulk, a helper that processes the stream iteratively (one document at a time from the user's point of view, batched into chunks in the background).
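The elasticsearch.helpers.bulk helper described above consumes an iterator of plain documents in which underscore-prefixed keys are treated as metadata. A sketch of that action format (field names and values here are illustrative, and the helper call itself is commented out since it assumes a running cluster):

```python
# Each action is a dict; keys starting with "_" (e.g. _index, _id, _type)
# are extracted as metadata, everything else becomes the document body.
actions = [
    {
        "_index": "logstash-test",
        "_type": "test",
        "_id": i,
        "host": "logsqa",
        "message": "test test",
    }
    for i in range(3)
]

for a in actions:
    meta = {k: v for k, v in a.items() if k.startswith("_")}
    print(meta)

# With a running cluster these would be executed as:
# from elasticsearch import Elasticsearch, helpers
# helpers.bulk(Elasticsearch(), actions)
```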

Hope this helps.

0 - http://elasticsearch-py.readthedocs.org/en/master/helpers.html#elasticsearch.helpers.bulk