# load the cleaned text into memory
filename = 'metamorphosis_clean.txt'
file = open(filename, 'rt')
text = file.read()
file.close()

# split the text into sentences with NLTK
from nltk import sent_tokenize
sentences = sent_tokenize(text)
print(sentences[0])
Running this script produces the following error:
Traceback (most recent call last):
  File "split_into_sentenes.py", line 1, in <module>
    import nltk
  File "/usr/local/lib/python2.7/dist-packages/nltk/__init__.py", line 114, in <module>
    from nltk.collocations import *
  File "/usr/local/lib/python2.7/dist-packages/nltk/collocations.py", line 37, in <module>
    from nltk.probability import FreqDist
  File "/usr/local/lib/python2.7/dist-packages/nltk/probability.py", line 47, in <module>
    from collections import defaultdict, Counter
  File "/usr/local/lib/python2.7/dist-packages/nltk/collections.py", line 13, in <module>
    import pydoc
  File "/usr/lib/python2.7/pydoc.py", line 56, in <module>
    import sys, imp, os, re, types, inspect, __builtin__, pkgutil, warnings
  File "/usr/lib/python2.7/inspect.py", line 39, in <module>
    import tokenize
  File "/usr/lib/python2.7/tokenize.py", line 39, in <module>
    COMMENT = N_TOKENS
NameError: name 'N_TOKENS' is not defined
Answer:
Most likely there is a file named token.py in the current directory (i.e., the directory from which you run the split_into_sentenes.py script). If a local token.py exists, it is imported ahead of the module of the same name in the standard library; the standard library's tokenize module does from token import * and then cannot find N_TOKENS, which is exactly the error you are seeing. Check whether such a file exists, and if it does, rename it to something that does not clash with a standard library module.
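
A quick way to confirm the shadowing is to ask Python which file it actually resolves for the token module. This is a minimal diagnostic sketch, assuming it is run from the same directory as split_into_sentenes.py:

# show where the "token" module Python imports really lives;
# if the path points into the current directory instead of
# /usr/lib/python2.7/, a local file is shadowing the standard library
import token
print(token.__file__)

After renaming the local file, also delete any leftover token.pyc: on Python 2, a stale compiled file keeps shadowing the standard library module even after the .py is gone.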