I'm going to use nltk.tokenize.word_tokenize on a cluster where my account is limited by a space quota. At home, I downloaded all the nltk resources with nltk.download(), but as far as I can tell this takes about 2.5 GB. That seems like overkill to me. Can you suggest a minimal (or near-minimal) set of dependencies for nltk.tokenize.word_tokenize? So far I've seen nltk.download('punkt'), but I'm not sure whether it's enough or how big it is. What exactly should I run to make it work?
Answer 0: (score: 21)
You are right. You need the Punkt tokenizer models. It's 13 MB, and nltk.download('punkt') should do the trick.
Answer 1: (score: 5)
In short:

nltk.download('punkt')

is enough.

In long:

You don't need to download all the models and corpora available in NLTK if you're just going to use NLTK for tokenization.

Actually, if you're just using word_tokenize(), then you won't really need any of the resources from nltk.download(). If we look at the code, the default word_tokenize(), which is basically the TreebankWordTokenizer, shouldn't use any additional resources:
alvas@ubi:~$ ls nltk_data/
chunkers corpora grammars help models stemmers taggers tokenizers
alvas@ubi:~$ mv nltk_data/ tmp_move_nltk_data/
alvas@ubi:~$ python
Python 2.7.11+ (default, Apr 17 2016, 14:00:29)
[GCC 5.3.1 20160413] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from nltk import word_tokenize
>>> from nltk.tokenize import TreebankWordTokenizer
>>> tokenizer = TreebankWordTokenizer()
>>> tokenizer.tokenize('This is a sentence.')
['This', 'is', 'a', 'sentence', '.']
But:
alvas@ubi:~$ ls nltk_data/
chunkers corpora grammars help models stemmers taggers tokenizers
alvas@ubi:~$ mv nltk_data/ tmp_move_nltk_data
alvas@ubi:~$ python
Python 2.7.11+ (default, Apr 17 2016, 14:00:29)
[GCC 5.3.1 20160413] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from nltk import sent_tokenize
>>> sent_tokenize('This is a sentence. This is another.')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 90, in sent_tokenize
tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 801, in load
opened_resource = _open(resource_url)
File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 919, in _open
return find(path_, path + ['']).open()
File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 641, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource u'tokenizers/punkt/english.pickle' not found. Please
use the NLTK Downloader to obtain the resource: >>>
nltk.download()
Searched in:
- '/home/alvas/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- u''
**********************************************************************
>>> from nltk import word_tokenize
>>> word_tokenize('This is a sentence.')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 106, in word_tokenize
return [token for sent in sent_tokenize(text, language)
File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 90, in sent_tokenize
tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 801, in load
opened_resource = _open(resource_url)
File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 919, in _open
return find(path_, path + ['']).open()
File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 641, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource u'tokenizers/punkt/english.pickle' not found. Please
use the NLTK Downloader to obtain the resource: >>>
nltk.download()
Searched in:
- '/home/alvas/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- u''
**********************************************************************
But if we look at https://github.com/nltk/nltk/blob/develop/nltk/tokenize/__init__.py#L93, that doesn't seem to be the case. It appears that word_tokenize implicitly calls sent_tokenize(), which requires the punkt model.
I'm not sure whether this is a bug or a feature, but it seems like the old idiom might become obsolete given the current code:
>>> from nltk import sent_tokenize, word_tokenize
>>> sentences = 'This is a foo bar sentence. This is another sentence.'
>>> tokenized_sents = [word_tokenize(sent) for sent in sent_tokenize(sentences)]
>>> tokenized_sents
[['This', 'is', 'a', 'foo', 'bar', 'sentence', '.'], ['This', 'is', 'another', 'sentence', '.']]
could simply become:
>>> word_tokenize(sentences)
['This', 'is', 'a', 'foo', 'bar', 'sentence', '.', 'This', 'is', 'another', 'sentence', '.']
But note that word_tokenize() flattens the list of lists of strings into a single flat list of strings.
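If you still want the per-sentence structure, you can keep the old idiom and flatten explicitly only when needed; for example with itertools.chain (plain-Python illustration using the tokenized output shown above):

```python
from itertools import chain

# The per-sentence output produced by the old idiom:
tokenized_sents = [['This', 'is', 'a', 'foo', 'bar', 'sentence', '.'],
                   ['This', 'is', 'another', 'sentence', '.']]

# Flattening it reproduces what word_tokenize() now returns directly:
flat = list(chain.from_iterable(tokenized_sents))
print(flat)
# ['This', 'is', 'a', 'foo', 'bar', 'sentence', '.', 'This', 'is', 'another', 'sentence', '.']
```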
Alternatively, you can try the new ToktokTokenizer (toktok.py) that was added to NLTK based on https://github.com/jonsafari/tok-tok, which requires no pre-trained models.
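A minimal sketch of the Toktok route, assuming an NLTK version that ships nltk.tokenize.toktok; since the tokenizer is purely rule-based, no pickled model is loaded:

```python
from nltk.tokenize.toktok import ToktokTokenizer

# No nltk.download() call needed: Toktok is regex-rule based.
toktok = ToktokTokenizer()
print(toktok.tokenize('This is a sentence.'))
```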
Answer 2: (score: 0)
If you bundle the large NLTK pickles into an AWS Lambda, the inline code editor won't be able to open the function. Use a Lambda layer instead: upload just the NLTK data as a layer and point to it in your code like this:
nltk.data.path.append("/opt/tmp_nltk")
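A fuller sketch of that Lambda setup; the /opt/tmp_nltk path is this answer's example layer location (Lambda layers are mounted under /opt), and the layer is assumed to contain a tokenizers/punkt directory:

```python
import nltk

# Add the layer's mount point to NLTK's resource search path so that
# load('tokenizers/punkt/english.pickle') finds the bundled data.
nltk.data.path.append("/opt/tmp_nltk")

# Import after the path is registered; punkt now resolves from the layer.
from nltk import word_tokenize
```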
Answer 3: (score: -1)
import nltk
nltk.download('punkt')
from nltk.tokenize import sent_tokenize, word_tokenize

EXAMPLE_TEXT = "Hello Mr. Smith, how are you doing today?"
print(sent_tokenize(EXAMPLE_TEXT))