UnicodeDecodeError when tokenizing Thai text in Python

Date: 2018-03-20 12:54:01

Tags: python-3.x tokenize python-unicode thai

I am trying to tokenize Thai text using deepcut in Python, and I am getting a UnicodeDecodeError.

This is what I tried:

import deepcut

thai = 'ตัดคำได้ดีมาก'
result = deepcut.tokenize(thai)

Expected output:

['ตัดคำ', 'ได้', 'ดี', 'มาก']

Attempt:

for i in result:
  print(i.decode('utf-8'))

Error: UnicodeDecodeError: 'utf8' codec can't decode byte 0xe0 in position 0: unexpected end of data

print([i for i in result])

Output: ['\xe0', '\xb8', '\x95', '\xe0', '\xb8', '\xb1', '\xe0', '\xb8', '\x94', '\xe0', '\xb8', '\x84', '\xe0', '\xb8', '\xb3', '\xe0', '\xb9', '\x84', '\xe0', '\xb8', '\x94', '\xe0', '\xb9', '\x89', '\xe0', '\xb8', '\x94', '\xe0', '\xb8', '\xb5', '\xe0', '\xb8', '\xa1', '\xe0', '\xb8', '\xb2', '\xe0', '\xb8', '\x81']
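The output above suggests each list element is a single byte of a multi-byte UTF-8 sequence, and a Thai character occupies three bytes in UTF-8, so decoding one byte in isolation cannot succeed. A minimal sketch (plain Python 3, independent of deepcut) reproducing exactly this error:

```python
# Each Thai character encodes to three bytes in UTF-8.
text = 'ตัด'
encoded = text.encode('utf-8')
print(len(encoded))  # 9 bytes for 3 characters

# Decoding only the lead byte (0xe0) is an incomplete sequence,
# which raises the same UnicodeDecodeError as in the question.
try:
    bytes([encoded[0]]).decode('utf-8')
except UnicodeDecodeError as e:
    print(e.reason)  # 'unexpected end of data'
```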

How do I get it to display the correct tokenization result, or is there a better way to tokenize Thai text?

1 Answer:

Answer 0 (score: 0):

You do not need to convert it back to UTF-8; in Python 3 the tokens returned by `deepcut.tokenize` are already `str` objects:

Try this:

import deepcut

thai = 'ตัดคำได้ดีมาก'
result = deepcut.tokenize(thai)

print([i for i in result])

Output:

['ตัด', 'คำ', 'ได้', 'ดี', 'มาก']
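Since the tokens are already `str` in Python 3, you can print or rejoin them directly, with no `.decode()` call. A minimal sketch using the tokens from the output above (hard-coded here so it runs without deepcut installed):

```python
# Tokens as shown in the answer's output; in Python 3 these are
# str objects, so no decoding step is needed.
tokens = ['ตัด', 'คำ', 'ได้', 'ดี', 'มาก']
print(all(isinstance(t, str) for t in tokens))  # True

# Joining the tokens reproduces the original input string.
print(''.join(tokens))  # ตัดคำได้ดีมาก
```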

Apart from that, you can also try this Thai NLP Module.