Getting error "'ascii' codec can't decode byte 0xc3 in position 149: ordinal not in range(128)" when rebuilding the haystack index

Date: 2016-04-25 16:30:12

Tags: django python-2.7 unicode django-haystack amazon-elasticsearch

I have an application in which I have to store people's names and make them searchable. The stack is Python (v2.7.6), Django (v1.9.5), and Django REST framework. The DBMS is PostgreSQL (v9.2). Since user names can be in Arabic, we use UTF-8 as the database encoding. For search we use Haystack (v2.4.1) with Amazon Elasticsearch for indexing. The index was building fine a few days ago, but now when I try to rebuild it with

python manage.py rebuild_index

it fails with the following error

'ascii' codec can't decode byte 0xc3 in position 149: ordinal not in range(128)

The full error traceback is

  File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 188, in handle_label
    self.update_backend(label, using)
  File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 233, in update_backend
    do_update(backend, index, qs, start, end, total, verbosity=self.verbosity, commit=self.commit)
  File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 96, in do_update
    backend.update(index, current_qs, commit=commit)
  File "/usr/local/lib/python2.7/dist-packages/haystack/backends/elasticsearch_backend.py", line 193, in update
    bulk(self.conn, prepped_docs, index=self.index_name, doc_type='modelresult')
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers/__init__.py", line 188, in bulk
    for ok, item in streaming_bulk(client, actions, **kwargs):
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers/__init__.py", line 160, in streaming_bulk
    for result in _process_bulk_chunk(client, bulk_actions, raise_on_exception, raise_on_error, **kwargs):
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers/__init__.py", line 85, in _process_bulk_chunk
    resp = client.bulk('\n'.join(bulk_actions) + '\n', **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 69, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/__init__.py", line 795, in bulk
    doc_type, '_bulk'), params=params, body=self._bulk_body(body))
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 329, in perform_request
    status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_requests.py", line 68, in perform_request
    response = self.session.request(method, url, data=body, timeout=timeout or self.timeout)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 455, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 558, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 330, in send
    timeout=timeout
  File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 558, in urlopen
    body=body, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 353, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python2.7/httplib.py", line 979, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python2.7/httplib.py", line 1013, in _send_request
    self.endheaders(body)
  File "/usr/lib/python2.7/httplib.py", line 975, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python2.7/httplib.py", line 833, in _send_output
    msg += message_body
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 149: ordinal not in range(128)

My guess is that the index built fine before because there were no Arabic characters in the database at the time, and now that users have entered Arabic characters the index can no longer be built.

3 answers:

Answer 0 (score: 0)

I suspect you are right that it is the Arabic characters now present in the database that are the cause.

It could also be related to this issue. The first link seems to have a workaround, but without much detail. I suspect what the author means is:

The proper fix is to use the unicode type instead of str, or to set the default encoding properly to (I assume) utf-8.

You will also want to check that the machine it runs on has LANG=en_US.UTF-8, or at least some UTF-8 LANG.
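A minimal sketch of the str/unicode distinction this answer points at (the name here is made up; the bytes are what a UTF-8 database would hand back, and the snippet behaves the same under Python 2 and 3):

```python
# -*- coding: utf-8 -*-
# The database returns UTF-8 bytes; decoding them with Python's implicit
# default codec (ascii in Python 2) fails on the first non-ASCII byte.
raw = u'José'.encode('utf-8')          # b'Jos\xc3\xa9'

try:
    raw.decode('ascii')                # what the implicit coercion attempts
except UnicodeDecodeError as exc:
    print(exc)                         # 'ascii' codec can't decode byte 0xc3 ...

assert raw.decode('utf-8') == u'José'  # an explicit UTF-8 decode works fine
```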

Answer 1 (score: 0)

Elasticsearch supports different encodings, so the Arabic characters themselves should not be the problem.

Since you are on AWS, I assume you are also using an authorization library such as requests-aws4auth. If that is the case, note that during authorization some unicode headers are added, e.g. u'x-amz-date'. This is a problem, because Python's httplib does the following during _send_output(): msg = "\r\n".join(self._buffer), where _buffer is the list of HTTP headers. Having unicode headers makes msg of <type 'unicode'>, while it should really be of type str (here is a similar issue with a different auth library).

The line that raises the exception, msg += message_body, raises it because Python needs to decode message_body to unicode so that it matches msg's type. Since py-elasticsearch already took care of the encoding (the body is already UTF-8 bytes), this second, implicit decode uses the ascii default and fails (as explained here).
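That mechanism can be sketched in a few lines (the header and body values are made up; under Python 2 the += itself triggers the ascii decode, which is reproduced explicitly here so the snippet also runs under Python 3):

```python
# -*- coding: utf-8 -*-
# A unicode header (as added by the auth library) forces msg to be unicode;
# the UTF-8-encoded body that py-elasticsearch produced must then be decoded
# to unicode to be concatenated, and the default ascii codec blows up on it.
headers = u'x-amz-date: 20160425T163012Z\r\n\r\n'   # unicode, via the auth library
body = u'{"name": "José"}'.encode('utf-8')          # UTF-8 bytes, via py-elasticsearch

try:
    # In Python 2, `headers + body` performs this decode implicitly.
    msg = headers + body.decode('ascii')
except UnicodeDecodeError as exc:
    print(exc)  # 'ascii' codec can't decode byte 0xc3 ...
```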

You may want to try swapping out the auth library (e.g. for DavidMuller/aws-requests-auth) and see whether that solves the problem.

Answer 2 (score: 0)

If you are using the requests-aws4auth package, you can use the following wrapper class in place of the AWS4Auth class. It encodes the headers that AWS4Auth creates into byte strings, avoiding the UnicodeDecodeError further downstream.

from requests_aws4auth import AWS4Auth

class AWS4AuthEncodingFix(AWS4Auth):
    def __call__(self, request):
        request = super(AWS4AuthEncodingFix, self).__call__(request)

        # Snapshot the header names first: _encode_header_to_utf8 may delete
        # and re-insert keys, which is unsafe while iterating the dict itself.
        for header_name in list(request.headers):
            self._encode_header_to_utf8(request, header_name)

        return request

    def _encode_header_to_utf8(self, request, header_name):
        value = request.headers[header_name]

        # Encode a unicode (Python 2) header value down to a UTF-8 byte string.
        if isinstance(value, unicode):
            value = value.encode('utf-8')

        # Re-key the header under a byte-string name if the name was unicode.
        if isinstance(header_name, unicode):
            del request.headers[header_name]
            header_name = header_name.encode('utf-8')

        request.headers[header_name] = value
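The same idea, independent of requests-aws4auth, can be sketched as a plain function over a header dict (the function name is made up, and Python 3's str stands in for Python 2's unicode here):

```python
def encode_headers_to_utf8(headers):
    """Return a copy of `headers` with all text names and values as UTF-8 bytes."""
    fixed = {}
    for name, value in headers.items():
        if isinstance(name, str):       # text header name -> bytes
            name = name.encode('utf-8')
        if isinstance(value, str):      # text header value -> bytes
            value = value.encode('utf-8')
        fixed[name] = value
    return fixed

fixed = encode_headers_to_utf8({u'x-amz-date': u'20160425T163012Z'})
assert all(isinstance(k, bytes) and isinstance(v, bytes) for k, v in fixed.items())
```

Building a new dict instead of mutating in place sidesteps the delete-while-iterating hazard entirely.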