Encoding a text column in a pandas DataFrame

Time: 2019-07-12 18:35:32

Tags: python

Where am I going wrong? I am trying to loop through each row of the DataFrame and encode the text column.

    data['text'] = data.apply(lambda row: 
        codecs(row['text'], "r", 'utf-8'), axis=1)

I get the error below. Why does the UTF encoding affect this part of the code? If I don't run the UTF encoding, I don't get an error:

    TypeError                                 Traceback (most recent call last)
    <ipython-input-101-0e1d5977a3b3> in <module>
    ----> 1 data['text'] = codecs(data['text'], "r", 'utf-8')
          2 
          3 data['text'] = data.apply(lambda row: 
          4     codecs(row['text'], "r", 'utf-8'), axis=1)

    TypeError: 'module' object is not callable

When I apply the solution, both lines work, but then I get this error:

    data['text_tokens'] = data.apply(lambda row: 
        nltk.word_tokenize(row['text']), axis=1)

Error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-138-73972d522748> in <module>
      1 data['text_tokens'] = data.apply(lambda row: 
----> 2     nltk.word_tokenize(row['text']), axis=1)

~/env/lib64/python3.6/site-packages/pandas/core/frame.py in apply(self, func, axis, broadcast, raw, reduce, result_type, args, **kwds)
   6485                          args=args,
   6486                          kwds=kwds)
-> 6487         return op.get_result()
   6488 
   6489     def applymap(self, func):

~/env/lib64/python3.6/site-packages/pandas/core/apply.py in get_result(self)
    149             return self.apply_raw()
    150 
--> 151         return self.apply_standard()
    152 
    153     def apply_empty_result(self):

~/env/lib64/python3.6/site-packages/pandas/core/apply.py in apply_standard(self)
    255 
    256         # compute the result using the series generator
--> 257         self.apply_series_generator()
    258 
    259         # wrap results

~/env/lib64/python3.6/site-packages/pandas/core/apply.py in apply_series_generator(self)
    284             try:
    285                 for i, v in enumerate(series_gen):
--> 286                     results[i] = self.f(v)
    287                     keys.append(v.name)
    288             except Exception as e:

<ipython-input-138-73972d522748> in <lambda>(row)
      1 data['text_tokens'] = data.apply(lambda row: 
----> 2     nltk.word_tokenize(row['text']), axis=1)

~/env/lib64/python3.6/site-packages/nltk/tokenize/__init__.py in word_tokenize(text, language, preserve_line)
    142     :type preserve_line: bool
    143     """
--> 144     sentences = [text] if preserve_line else sent_tokenize(text, language)
    145     return [
    146         token for sent in sentences for token in _treebank_word_tokenizer.tokenize(sent)

~/env/lib64/python3.6/site-packages/nltk/tokenize/__init__.py in sent_tokenize(text, language)
    104     """
    105     tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
--> 106     return tokenizer.tokenize(text)
    107 
    108 

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in tokenize(self, text, realign_boundaries)
   1275         Given a text, returns a list of the sentences in that text.
   1276         """
-> 1277         return list(self.sentences_from_text(text, realign_boundaries))
   1278 
   1279     def debug_decisions(self, text):

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in sentences_from_text(self, text, realign_boundaries)
   1329         follows the period.
   1330         """
-> 1331         return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
   1332 
   1333     def _slices_from_text(self, text):

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in <listcomp>(.0)
   1329         follows the period.
   1330         """
-> 1331         return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
   1332 
   1333     def _slices_from_text(self, text):

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in span_tokenize(self, text, realign_boundaries)
   1319         if realign_boundaries:
   1320             slices = self._realign_boundaries(text, slices)
-> 1321         for sl in slices:
   1322             yield (sl.start, sl.stop)
   1323 

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in _realign_boundaries(self, text, slices)
   1360         """
   1361         realign = 0
-> 1362         for sl1, sl2 in _pair_iter(slices):
   1363             sl1 = slice(sl1.start + realign, sl1.stop)
   1364             if not sl2:

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in _pair_iter(it)
    316     it = iter(it)
    317     try:
--> 318         prev = next(it)
    319     except StopIteration:
    320         return

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in _slices_from_text(self, text)
   1333     def _slices_from_text(self, text):
   1334         last_break = 0
-> 1335         for match in self._lang_vars.period_context_re().finditer(text):
   1336             context = match.group() + match.group('after_tok')
   1337             if self.text_contains_sentbreak(context):

TypeError: ('cannot use a string pattern on a bytes-like object', 'occurred at index 0')

2 Answers:

Answer 0 (score: 2)

Encoding

As the first error says, codecs is not callable: it is the name of the module itself.

You probably want:

    data['text'] = data.apply(lambda row: 
        codecs.encode(row['text'], 'utf-8'), axis=1)
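
For clarity, codecs.encode returns bytes rather than str, so the column values will print with a b prefix afterwards. A minimal sketch (the sample DataFrame below is made up, just for illustration):

    import codecs
    import pandas as pd

    # Hypothetical sample data, just for illustration
    data = pd.DataFrame({'text': ['Hello world.', 'Another sentence.']})

    # codecs.encode turns each str into a bytes object (note the b prefix when printed)
    data['text'] = data.apply(lambda row:
        codecs.encode(row['text'], 'utf-8'), axis=1)

    print(data['text'].iloc[0])         # b'Hello world.'
    print(type(data['text'].iloc[0]))   # <class 'bytes'>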

Tokenization

The error raised by word_tokenize comes from the fact that the function is being applied to strings that were encoded in the previous step: codecs.encode turns the text into bytes literals.
From the codecs documentation:

    Most standard codecs are text encodings, which encode text to bytes, but there are also codecs that encode text to text, and bytes to bytes.

word_tokenize does not work on bytes, as the error says (see the last line of the traceback).
If you remove the encoding step, it works, as the sketch below shows.
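
A minimal sketch of tokenizing the plain strings directly, with no encoding step (this assumes the NLTK punkt data has been downloaded; the sample DataFrame is made up for illustration):

    import nltk
    import pandas as pd

    # nltk.download('punkt')  # needed once for word_tokenize
    # Hypothetical sample data, just for illustration
    data = pd.DataFrame({'text': ['Hello world. How are you?']})

    # Tokenize the plain (unicode) strings -- no codecs.encode beforehand
    data['text_tokens'] = data.apply(lambda row:
        nltk.word_tokenize(row['text']), axis=1)

    print(data['text_tokens'].iloc[0])
    # ['Hello', 'world', '.', 'How', 'are', 'you', '?']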


As for your concern about the video: the prefix u means unicode, and the prefix b means a bytes literal, which is what prefixes the strings if you print the DataFrame after using codecs.encode.
In Python 3 (your version is 3.6, as the traceback shows), the default string type is already Unicode, so the u prefix is redundant and usually not displayed, but the strings are unicode nonetheless.
So I'm fairly sure you are safe: you can simply skip codecs.encode altogether.
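
To make the str/bytes distinction concrete, here is a small sketch (independent of the DataFrame, just for illustration):

    import codecs

    s = 'Hello world.'              # str: already unicode in Python 3, printed without a prefix
    b = codecs.encode(s, 'utf-8')   # bytes: printed with the b prefix
    print(s)                        # Hello world.
    print(b)                        # b'Hello world.'
    print(b.decode('utf-8') == s)   # True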

Answer 1 (score: 2)

You can do something even simpler:

    df['text'] = df['text'].str.encode('utf-8')

Reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.encode.html
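
If the encoded bytes ever need to be turned back into plain strings, Series.str.decode reverses the operation. A small sketch with a made-up DataFrame:

    import pandas as pd

    # Hypothetical sample data, just for illustration
    df = pd.DataFrame({'text': ['Hello world.', 'Another sentence.']})

    encoded = df['text'].str.encode('utf-8')   # bytes values (b prefix when printed)
    decoded = encoded.str.decode('utf-8')      # back to plain str

    print(encoded.iloc[0])   # b'Hello world.'
    print(decoded.iloc[0])   # Hello world.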