The following request from the Python client to Elasticsearch fails:
2014-12-19 13:39:05,429 WARNING GET http://10.129.0.53:9200/delivery-logs-index.prod-20141218/_search?timeout=20m [status:N/A request:10.010s]
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/elasticsearch/connection/http_urllib3.py", line 46, in perform_request
    response = self.pool.urlopen(method, url, body, retries=False, headers=headers, **kw)
  File "/usr/lib/python2.6/site-packages/urllib3/connectionpool.py", line 559, in urlopen
    _pool=self, _stacktrace=stacktrace)
  File "/usr/lib/python2.6/site-packages/urllib3/util/retry.py", line 223, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/lib/python2.6/site-packages/urllib3/connectionpool.py", line 516, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python2.6/site-packages/urllib3/connectionpool.py", line 336, in _make_request
    self, url, "Read timed out. (read timeout=%s)" % read_timeout)
ReadTimeoutError: HTTPConnectionPool(host=u'10.129.0.53', port=9200): Read timed out. (read timeout=10)
The client is instantiated like this:

Elasticsearch([es_host],
              sniff_on_start=True,
              max_retries=100,
              retry_on_timeout=True,
              sniff_on_connection_fail=True,
              sniff_timeout=1000)
Is there a way to increase the request timeout? At the moment it seems to default to a read timeout of 10 seconds.
Answer 0 (score: 2)
You can try adding a request_timeout value to the request, for example:

res = client.search(index=blabla, search_type="count", timeout="20m", request_timeout=10000, body={ ... })
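For context: in elasticsearch-py, request_timeout is the client-side read timeout in seconds (enforced by urllib3), while timeout is forwarded to Elasticsearch and limits the server-side search execution. A minimal sketch, reusing the host and index name from the log above and assuming a simple match_all query:

from elasticsearch import Elasticsearch

client = Elasticsearch(["10.129.0.53:9200"])  # host taken from the log above

# request_timeout (seconds) caps how long the client waits for the HTTP response;
# timeout is sent to Elasticsearch and limits the server-side search itself.
res = client.search(
    index="delivery-logs-index.prod-20141218",
    search_type="count",
    timeout="20m",
    request_timeout=60,
    body={"query": {"match_all": {}}},
)
print(res["hits"]["total"])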
Answer 1 (score: 2)
You can also pass timeout=60 when instantiating the client object (60 meaning 60 seconds, of course just an example). This parameter overrides the 10-second default specified in the Connection constructor:
https://github.com/elastic/elasticsearch-py/blob/master/elasticsearch/connection/base.py#L27
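A minimal sketch of this approach, reusing the sniffing options from the question (the 60-second value is just an example):

from elasticsearch import Elasticsearch

es_host = "10.129.0.53:9200"  # host from the question's log

# timeout=60 becomes the default read timeout for every request made through
# this client, replacing the library's built-in 10-second default.
client = Elasticsearch(
    [es_host],
    sniff_on_start=True,
    sniff_on_connection_fail=True,
    sniff_timeout=1000,
    max_retries=100,
    retry_on_timeout=True,
    timeout=60,
)

A per-request request_timeout (as in the previous answer) still takes precedence over this client-wide setting.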