NLTK word tokenizer treats the ending single quote as a separate word

Asked: 2018-03-26 20:06:48

Tags: python nltk

Here is a code snippet from an IPython notebook:

from nltk.tokenize import word_tokenize

test = "'v'"
words = word_tokenize(test)
words

The output is:

["'v", "'"]

As you can see, the ending single quote is treated as a separate word, while the starting single quote is kept as part of "'v". I would like the output to be either

["'v'"]

or

["'", "v", "'"]

Is there a way to achieve this?

2 Answers:

Answer 0 (score: 2):

Try MosesTokenizer and MosesDetokenizer from nltk.tokenize.moses:

from nltk.tokenize.moses import MosesTokenizer, MosesDetokenizer
t, d = MosesTokenizer(), MosesDetokenizer()
tokens = t.tokenize(test)
tokens
['&apos;v&apos;']

where &apos; = '

You can also use the escape=False argument to prevent escaping of XML special characters:

>>> t.tokenize("'v'", escape=False)
["'v'"]

Keeping the output as &apos;v&apos; is consistent with the original Moses tokenizer, i.e.

~/mosesdecoder/scripts/tokenizer$ perl tokenizer.perl -l en < x
Tokenizer Version 1.1
Language: en
Number of threads: 1
&apos;v&apos;
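
Note that nltk.tokenize.moses was removed from later NLTK releases; the same wrapper now lives in the standalone sacremoses package. A minimal sketch of the same calls, assuming sacremoses is installed (the exact tokens may differ slightly between versions):

from sacremoses import MosesTokenizer, MosesDetokenizer

mt, md = MosesTokenizer(lang='en'), MosesDetokenizer(lang='en')
tokens = mt.tokenize("'v'", escape=False)   # escape=False keeps ' instead of &apos;
print(tokens)
print(md.detokenize(tokens))                # round-trip back to a plain string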

There are other tokenizers if you want to explore and have more control over the single quotes.
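
For example, two other tokenizers that ship with NLTK can be compared on the same string (a sketch; outputs vary by tokenizer and NLTK version, so run it to see how each one handles the quotes):

from nltk.tokenize import TweetTokenizer
from nltk.tokenize.toktok import ToktokTokenizer

text = "'v'"
print(TweetTokenizer().tokenize(text))   # casual/tweet tokenizer
print(ToktokTokenizer().tokenize(text))  # tok-tok tokenizer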

Answer 1 (score: 2):

It seems that this is not a bug but the expected output of nltk.word_tokenize().

This is consistent with the Treebank word tokenizer from Robert McIntyre's tokenizer.sed:
$ sed -f tokenizer.sed 
'v'
'v ' 

As @Prateek pointed out, you can try other tokenizers that may suit your needs.

The more interesting question is: why does the starting single quote stick to the following character?
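
A quick comparison with double quotes shows that the asymmetry is specific to the bare apostrophe (a small sketch; `` and '' are the usual Treebank quote conventions):

from nltk.tokenize import word_tokenize

# Double quotes are split off on both sides (and converted Treebank-style),
# while a bare apostrophe before a word character stays attached to the word.
print(word_tokenize('"v"'))   # typically: ['``', 'v', "''"]
print(word_tokenize("'v'"))   # ["'v", "'"] as in the question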

Couldn't we hack the TreebankWordTokenizer as in https://github.com/nltk/nltk/blob/develop/nltk/tokenize/__init__.py, e.g.:

import re

from nltk.tokenize.treebank import TreebankWordTokenizer

# Standard word tokenizer.
_treebank_word_tokenizer = TreebankWordTokenizer()

# See discussion on https://github.com/nltk/nltk/pull/1437
# Adding to TreebankWordTokenizer, the splits on
# - chevron quotes u'\xab' and u'\xbb' .
# - unicode quotes u'\u2018', u'\u2019', u'\u201c' and u'\u201d'

improved_open_quote_regex = re.compile(u'([«“‘„]|[`]+|[\']+)', re.U)
improved_close_quote_regex = re.compile(u'([»”’])', re.U)
improved_punct_regex = re.compile(r'([^\.])(\.)([\]\)}>"\'' u'»”’ ' r']*)\s*$', re.U)
_treebank_word_tokenizer.STARTING_QUOTES.insert(0, (improved_open_quote_regex, r' \1 '))
_treebank_word_tokenizer.ENDING_QUOTES.insert(0, (improved_close_quote_regex, r' \1 '))
_treebank_word_tokenizer.PUNCTUATION.insert(0, (improved_punct_regex, r'\1 \2 \3 '))

_treebank_word_tokenizer.tokenize("'v'")

[OUT]:

["'", 'v', "'"]

Yes, the modification will work for the string in the OP, but then it starts breaking all the clitics, e.g.

>>> print(_treebank_word_tokenizer.tokenize("'v', I've been fooled but I'll seek revenge."))
["'", 'v', "'", ',', 'I', "'", 've', 'been', 'fooled', 'but', 'I', "'", 'll', 'seek', 'revenge', '.']

Note that the original nltk.word_tokenize() keeps the starting single quotes attached, just like the clitics, and outputs:

>>> print(nltk.word_tokenize("'v', I've been fooled but I'll seek revenge."))
["'v", "'", ',', 'I', "'ve", 'been', 'fooled', 'but', 'I', "'ll", 'seek', 'revenge', '.']

There are strategies to handle ending quotes but not starting quotes after clitics, see https://github.com/nltk/nltk/blob/develop/nltk/tokenize/treebank.py#L268
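
To see those rules directly, the Treebank tokenizer exposes its quote regexes as lists of (pattern, substitution) pairs, so they can simply be printed (a quick inspection sketch; nothing is modified here):

from nltk.tokenize.treebank import TreebankWordTokenizer

t = TreebankWordTokenizer()
print('STARTING_QUOTES:')
for regexp, substitution in t.STARTING_QUOTES:
    print(' ', regexp.pattern, '->', substitution)
print('ENDING_QUOTES:')
for regexp, substitution in t.ENDING_QUOTES:
    print(' ', regexp.pattern, '->', substitution)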

But the main reason for this "problem" is that the word tokenizer has no sense of balancing quotation marks. If we look at the MosesTokenizer, there are many more mechanisms for handling quotes.
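
As a quick check of that claim, we can run the clitic-heavy sentence from above through the Moses tokenizer (a sketch; it assumes the MosesTokenizer from the first answer, or its sacremoses successor, is available):

from nltk.tokenize.moses import MosesTokenizer

# escape=False keeps literal quotes instead of &apos; entities, which makes the
# quote/clitic handling easier to compare with nltk.word_tokenize().
print(MosesTokenizer().tokenize("'v', I've been fooled but I'll seek revenge.", escape=False))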

Interestingly, Stanford CoreNLP doesn't do that; it splits the quotes off entirely.

In the terminal:

wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31

java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-preload tokenize,ssplit,pos,lemma,parse,depparse \
-status_port 9000 -port 9000 -timeout 15000

In Python:

>>> from nltk.parse.corenlp import CoreNLPParser
>>> parser = CoreNLPParser()
>>> parser.tokenize("'v'")
<generator object GenericCoreNLPParser.tokenize at 0x1148f9af0>
>>> list(parser.tokenize("'v'"))
["'", 'v', "'"]
>>> list(parser.tokenize("I've"))
['I', "'", 've']
>>> list(parser.tokenize("I've'"))
['I', "'ve", "'"]
>>> list(parser.tokenize("I'lk'"))
['I', "'", 'lk', "'"]
>>> list(parser.tokenize("I'lk"))
['I', "'", 'lk']
>>> list(parser.tokenize("I'll"))
['I', "'", 'll']

It looks like there is some sort of regex hack going on to recognize/correct English clitics.

If we do some reverse engineering:

>>> list(parser.tokenize("'re"))
["'", 're']
>>> list(parser.tokenize("you're"))
['you', "'", 're']
>>> list(parser.tokenize("you're'"))
['you', "'re", "'"]
>>> list(parser.tokenize("you 're'"))
['you', "'re", "'"]
>>> list(parser.tokenize("you the 're'"))
['you', 'the', "'re", "'"]

A regex could be added to patch word_tokenize, e.g.:

>>> import re
>>> pattern = re.compile(r"(?i)(\')(?!ve|ll|t)(\w)\b")
>>> x = "I'll be going home I've the 'v ' isn't want I want to split but I want to catch tokens like 'v and 'w ' ."
>>> pattern.sub(r'\1 \2', x)
"I'll be going home I've the ' v ' isn't want I want to split but I want to catch tokens like ' v and ' w ' ."
>>> x = "I 'll be going home I 've the 'v ' isn't want I want to split but I want to catch tokens like 'v and 'w ' ."
>>> pattern.sub(r'\1 \2', x)
"I 'll be going home I 've the ' v ' isn't want I want to split but I want to catch tokens like ' v and ' w ' ."

So we can do something like this:

import re
from nltk.tokenize import sent_tokenize
from nltk.tokenize.treebank import TreebankWordTokenizer

# Standard word tokenizer.
_treebank_word_tokenizer = TreebankWordTokenizer()

# See discussion on https://github.com/nltk/nltk/pull/1437
# Adding to TreebankWordTokenizer, the splits on
# - chevron quotes u'\xab' and u'\xbb' .
# - unicode quotes u'\u2018', u'\u2019', u'\u201c' and u'\u201d'

improved_open_quote_regex = re.compile(u'([«“‘„]|[`]+)', re.U)
improved_open_single_quote_regex = re.compile(r"(?i)(\')(?!re|ve|ll|m|t|s|d)(\w)\b", re.U)
improved_close_quote_regex = re.compile(u'([»”’])', re.U)
improved_punct_regex = re.compile(r'([^\.])(\.)([\]\)}>"\'' u'»”’ ' r']*)\s*$', re.U)
_treebank_word_tokenizer.STARTING_QUOTES.insert(0, (improved_open_quote_regex, r' \1 '))
_treebank_word_tokenizer.STARTING_QUOTES.append((improved_open_single_quote_regex, r'\1 \2'))
_treebank_word_tokenizer.ENDING_QUOTES.insert(0, (improved_close_quote_regex, r' \1 '))
_treebank_word_tokenizer.PUNCTUATION.insert(0, (improved_punct_regex, r'\1 \2 \3 '))

def word_tokenize(text, language='english', preserve_line=False):
    """
    Return a tokenized copy of *text*,
    using NLTK's recommended word tokenizer
    (currently an improved :class:`.TreebankWordTokenizer`
    along with :class:`.PunktSentenceTokenizer`
    for the specified language).

    :param text: text to split into words
    :type text: str
    :param language: the model name in the Punkt corpus
    :type language: str
    :param preserve_line: An option to preserve the sentence and not sentence-tokenize it.
    :type preserve_line: bool
    """
    sentences = [text] if preserve_line else sent_tokenize(text, language)
    return [token for sent in sentences
            for token in _treebank_word_tokenizer.tokenize(sent)]

[OUT]:

>>> print(word_tokenize("The 'v', I've been fooled but I'll seek revenge."))
['The', "'", 'v', "'", ',', 'I', "'ve", 'been', 'fooled', 'but', 'I', "'ll", 'seek', 'revenge', '.']
>>> word_tokenize("'v' 're'")
["'", 'v', "'", "'re", "'"]