nltk stemmer: string index out of range

Asked: 2017-01-07 03:48:44

Tags: nlp nltk stemming porter-stemmer

I have a set of pickled text documents that I would like to stem using nltk's PorterStemmer. For reasons specific to my project, I would like to do the stemming inside a django app view.

However, when stemming the documents inside a django view, PorterStemmer().stem() raises IndexError: string index out of range for the string 'oed'. That is, running the following:

# xkcd_project/search/views.py
from django.shortcuts import render
from nltk.stem.porter import PorterStemmer

def get_results(request):
    s = PorterStemmer()
    s.stem('oed')
    return render(request, 'list.html')

raises the error above:

Traceback (most recent call last):
  File "//anaconda/envs/xkcd/lib/python2.7/site-packages/django/core/handlers/exception.py", line 39, in inner
    response = get_response(request)
  File "//anaconda/envs/xkcd/lib/python2.7/site-packages/django/core/handlers/base.py", line 187, in _get_response
    response = self.process_exception_by_middleware(e, request)
  File "//anaconda/envs/xkcd/lib/python2.7/site-packages/django/core/handlers/base.py", line 185, in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/Users/jkarimi91/Projects/xkcd_search/xkcd_project/search/views.py", line 15, in get_results
    s.stem('oed')
  File "//anaconda/envs/xkcd/lib/python2.7/site-packages/nltk/stem/porter.py", line 665, in stem
    stem = self._step1b(stem)
  File "//anaconda/envs/xkcd/lib/python2.7/site-packages/nltk/stem/porter.py", line 376, in _step1b
    lambda stem: (self._measure(stem) == 1 and
  File "//anaconda/envs/xkcd/lib/python2.7/site-packages/nltk/stem/porter.py", line 258, in _apply_rule_list
    if suffix == '*d' and self._ends_double_consonant(word):
  File "//anaconda/envs/xkcd/lib/python2.7/site-packages/nltk/stem/porter.py", line 214, in _ends_double_consonant
    word[-1] == word[-2] and
IndexError: string index out of range

Now the really strange part is that running the same stemmer on the same string outside django (whether in a standalone python file or an interactive python console) produces no error at all. In other words:

# test.py
from nltk.stem.porter import PorterStemmer
s = PorterStemmer()
print s.stem('oed')

followed by:

python test.py
# successfully prints 'o'

What is causing this issue?

2 answers:

Answer 0 (score: 29)

This is an NLTK bug specific to NLTK version 3.2.2, and I am responsible for it. It was introduced by PR https://github.com/nltk/nltk/pull/1261, which rewrote the Porter stemmer.

I wrote a fix in NLTK 3.2.3. If you are on version 3.2.2 and want the fix, simply upgrade, e.g. by running

pip install -U nltk
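Until the upgrade is in place, a small guard around the stemmer call can work around the bug. This is a hedged sketch, not part of the official fix, and `safe_stem` is a hypothetical helper name:

```python
def safe_stem(stemmer, token):
    # Hypothetical workaround for the NLTK 3.2.2 bug: fall back to the
    # raw token if stemming raises IndexError on a short intermediate stem.
    try:
        return stemmer.stem(token)
    except IndexError:
        return token
```

Upgrading remains the proper fix; this only keeps a view from crashing in the meantime.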

Answer 1 (score: 3)

I debugged the nltk.stem.porter module with pdb. After a few iterations, _apply_rule_list() ends up with:

>>> rule
(u'at', u'ate', None)
>>> word
u'o'

At this point the _ends_double_consonant() method tries to evaluate word[-1] == word[-2] and fails.
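The failing comparison can be reproduced in isolation: on a one-character stem, word[-2] is out of range:

```python
# Minimal reproduction of the failing comparison, using the
# intermediate stem u'o' observed in pdb above.
word = u'o'
try:
    word[-1] == word[-2]  # word[-2] is out of range for a 1-char string
except IndexError as exc:
    print(exc)  # string index out of range
```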

If I am not mistaken, in NLTK 3.2 the corresponding method was as follows:

def _doublec(self, word):
    """doublec(word) is TRUE <=> word ends with a double consonant"""
    if len(word) < 2:
        return False
    if (word[-1] != word[-2]):      
        return False        
    return self._cons(word, len(word)-1)

As far as I can tell, the len(word) < 2 check is missing in the new version.

_ends_double_consonant()更改为类似的内容应该有效:

def _ends_double_consonant(self, word):
    """Implements condition *d from the paper

    Returns True if word ends with a double consonant
    """
    if len(word) < 2:
        return False
    return (
        word[-1] == word[-2] and
        self._is_consonant(word, len(word)-1)
    )
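The patched logic can be sanity-checked standalone. This is a sketch: `is_consonant` below is a simplified stand-in for NLTK's `_is_consonant`, which additionally treats 'y' as a consonant or vowel depending on context:

```python
VOWELS = frozenset('aeiou')

def is_consonant(word, i):
    # Simplified stand-in for PorterStemmer._is_consonant: treats
    # everything outside 'aeiou' as a consonant (ignores the 'y' rule).
    return word[i] not in VOWELS

def ends_double_consonant(word):
    # Mirrors the patched method: the length guard prevents the
    # IndexError on one-character stems such as u'o'.
    if len(word) < 2:
        return False
    return word[-1] == word[-2] and is_consonant(word, len(word) - 1)

print(ends_double_consonant('o'))     # False, no IndexError
print(ends_double_consonant('fall'))  # True
print(ends_double_consonant('tree'))  # False, 'e' is a vowel
```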

I just proposed this change in the related NLTK issue.