UnicodeDecodeError with md5 id when bulk-importing data into Elasticsearch

Date: 2018-01-06 11:47:55

Tags: python elasticsearch unicode

I wrote a simple Python script that uses the bulk API to import data into Elasticsearch.

# -*- encoding: utf-8 -*-
import csv
import datetime
import hashlib
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk
from dateutil.relativedelta import relativedelta


ORIGINAL_FORMAT = '%y-%m-%d %H:%M:%S'
INDEX_PREFIX = 'my-log'
INDEX_DATE_FORMAT = '%Y-%m-%d'
FILE_ADDR = '/media/zeinab/ZiZi/Elastic/python/elastic-test/elasticsearch-import-data/sample_data/sample.csv'


def set_data(input_file):
    with open(input_file) as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            sendtime = datetime.datetime.strptime(row['sendTime'].split('.')[0], ORIGINAL_FORMAT)

            yield {
                "_index": '{0}-{1}_{2}'.format(
                                        INDEX_PREFIX,
                                        sendtime.replace(day=1).strftime(INDEX_DATE_FORMAT),
                                        (sendtime.replace(day=1) + relativedelta(months=1)).strftime(INDEX_DATE_FORMAT)),
                "_type": 'data',
                '_id': hashlib.md5("{0}{1}{2}{3}{4}".format(sendtime, row['IMSI'], row['MSISDN'], int(row['ruleRef']), int(row['sponsorRef']))).digest(),
                "_source": {
                    'body': {
                        'status': int(row['status']),
                        'sendTime': sendtime
                    }
                }
            }


if __name__ == "__main__":
    es = Elasticsearch(['http://{0}:{1}'.format('my.host.ip.addr', 9200)])
    es.indices.delete(index='*')
    success, _ = bulk(es, set_data(FILE_ADDR))

This comment helped me write/use the set_data method.

Unfortunately, I get this exception:

/usr/bin/python2.7 /media/zeinab/ZiZi/Elastic/python/elastic-test/elasticsearch-import-data/import_bulk_data.py
Traceback (most recent call last):
  File "/media/zeinab/ZiZi/Elastic/python/elastic-test/elasticsearch-import-data/import_bulk_data.py", line 59, in <module>
    success, _ = bulk(es, set_data(source_file))
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers/__init__.py", line 257, in bulk
    for ok, item in streaming_bulk(client, actions, **kwargs):
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers/__init__.py", line 180, in streaming_bulk
    client.transport.serializer):
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers/__init__.py", line 60, in _chunk_actions
    action = serializer.dumps(action)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/serializer.py", line 50, in dumps
    raise SerializationError(data, e)
elasticsearch.exceptions.SerializationError: ({u'index': {u'_type': 'data', u'_id': '8\x1dI\xa2\xe9\xa2H-\xa6\x0f\xbd=\xa7CY\xa3', u'_index': 'my-log-2017-04-01_2017-05-01'}}, UnicodeDecodeError('utf8', '8\x1dI\xa2\xe9\xa2H-\xa6\x0f\xbd=\xa7CY\xa3', 3, 4, 'invalid start byte'))

Process finished with exit code 1

I can insert this data into Elasticsearch successfully using the index API:

es.index(index='{0}-{1}_{2}'.format(
    INDEX_PREFIX,
    sendtime.replace(day=1).strftime(INDEX_DATE_FORMAT),
    (sendtime.replace(day=1) + relativedelta(months=1)).strftime(INDEX_DATE_FORMAT)
),
         doc_type='data',
         id=hashlib.md5("{0}{1}{2}{3}{4}".format(sendtime, row['IMSI'], row['MSISDN'], int(row['ruleRef']), int(row['sponsorRef']))).digest(),
         body={
                'status': int(row['status']),
                'sendTime': sendtime
            }
         )

The problem with the index API is that it is very slow; it takes about 2 seconds to import just 50 records. I was hoping the bulk API would help me speed this up.

1 Answer:

Answer 0 (score: 1)

According to the hashlib documentation, the digest method will

    Return the digest of the data passed to the update() method so far. This is a bytes object of size digest_size which may contain bytes in the whole range from 0 to 255.

So the resulting bytes may not be decodable as Unicode.

>>> id_ = hashlib.md5('abc'.encode('utf-8')).digest()
>>> id_
b'\x90\x01P\x98<\xd2O\xb0\xd6\x96?}(\xe1\x7fr'
>>> id_.decode('utf-8')
Traceback (most recent call last):
  File "<console>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x90 in position 0: invalid start byte
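This is exactly what trips up the bulk helper, which JSON-serializes each action before sending it. A minimal sketch of the root cause (on Python 3 the failure surfaces as a TypeError rather than the UnicodeDecodeError above, but the raw-bytes id is the problem in both cases):

```python
import hashlib
import json

# digest() returns raw bytes, which are not a valid JSON value
doc_id = hashlib.md5(b'abc').digest()

try:
    json.dumps({'index': {'_id': doc_id}})
except TypeError as exc:
    # e.g. "Object of type bytes is not JSON serializable"
    print(exc)
```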

The hexdigest method, on the other hand, produces a string as output; from the docs:

    Like digest() except the digest is returned as a string object of double length, containing only hexadecimal digits. This may be used to exchange the value safely in email or other non-binary environments.

>>> id_ = hashlib.md5('abc'.encode('utf-8')).hexdigest()
>>> id_
'900150983cd24fb0d6963f7d28e17f72'
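Applied to the question's set_data generator, the fix is to build the _id with hexdigest() instead of digest(). A sketch with hypothetical sample values for row and sendtime (the .encode() call is only needed on Python 3, where md5 requires bytes input):

```python
import datetime
import hashlib

# Hypothetical sample values mirroring the question's CSV columns
row = {'IMSI': '432111234567890', 'MSISDN': '989121234567',
       'ruleRef': '12', 'sponsorRef': '7'}
sendtime = datetime.datetime(2017, 4, 3, 11, 22, 33)

key = "{0}{1}{2}{3}{4}".format(sendtime, row['IMSI'], row['MSISDN'],
                               int(row['ruleRef']), int(row['sponsorRef']))

# hexdigest() yields a 32-character hex string: JSON-safe, usable as _id
doc_id = hashlib.md5(key.encode('utf-8')).hexdigest()
print(doc_id)
```

Because the hex string is plain ASCII, the bulk helper's JSON serializer accepts it, and the documents keep the same deduplication behavior the md5 id was providing.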