I am trying to parse about 23,000 documents using the Stanford NER and Stanford POS Tagger. I implemented it with the following pseudocode -
```
for each in documents:
    eachSentences = PunktTokenize(each)
    # code to generate NER tags on each sentence
    # code to generate POS tags on the above output
```
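For context, a minimal runnable version of that loop with NLTK's Stanford wrappers might look like the sketch below; the jar and model paths, and the `documents` list, are assumptions to fill in for your setup:

```python
# A minimal sketch of the per-document loop, assuming NLTK 3.2.1 with the
# Stanford NER/POS jars and models downloaded locally (paths are placeholders).
from nltk import sent_tokenize, word_tokenize  # sent_tokenize uses Punkt
from nltk.tag import StanfordNERTagger, StanfordPOSTagger

ner = StanfordNERTagger(
    'classifiers/english.all.3class.distsim.crf.ser.gz',  # 3-class model
    'stanford-ner.jar')
pos = StanfordPOSTagger(
    'models/english-bidirectional-distsim.tagger',        # Maxent tagger model
    'stanford-postagger.jar')

for doc in documents:  # `documents` assumed to be a list of raw strings
    for sentence in sent_tokenize(doc):
        tokens = word_tokenize(sentence)
        ner_tags = ner.tag(tokens)  # launches a fresh JVM for every call
        pos_tags = pos.tag(tokens)  # ...and another one here
```

Each `.tag()` call writes the tokens to a temporary file and spawns a new `java` subprocess, which is the main reason a run like this is so slow.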
On a 4-core machine with 15 GB of RAM, the NER pass alone would take about 945 hours. I tried to speed things up with the `threading` library, but I got the following error -
```
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "removeStopWords.py", line 75, in partofspeechRecognition
    listOfRes_new = namedEntityRecognition(listRes[min:max])
  File "removeStopWords.py", line 63, in namedEntityRecognition
    listRes_ner.append(namedEntityRecognitionResume(eachResSentence))
  File "removeStopWords.py", line 50, in namedEntityRecognitionResume
    ner2Tags = ner2.tag(each.title().split())
  File "/home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/tag/stanford.py", line 71, in tag
    return sum(self.tag_sents([tokens]), [])
  File "/home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/tag/stanford.py", line 98, in tag_sents
    os.unlink(self._input_file_path)
OSError: [Errno 2] No such file or directory: '/tmp/tmpvMNqwB'
```
I am using NLTK 3.2.1, the Stanford NER and POS 3.7.0 jar files, and the `threading` module. As far as I can tell, this may be caused by threads contending over the same file in /tmp. Please correct me if I am wrong; also, what is the best way to run the above with threads, or is there a better way to implement it?
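For what it's worth, in NLTK 3.2.1 `tag_sents()` stores the temporary file path on the tagger instance itself (`self._input_file_path`), so two threads sharing one tagger can clobber each other's path and then `os.unlink` a file the other thread already removed, which matches the traceback above. One way to sidestep the shared state, sketched below with an assumed `documents` list and placeholder paths, is to parallelize with processes and give each worker its own tagger:

```python
# A sketch: process-based parallelism with one tagger per worker, so no
# temp-file state is shared between concurrent tag() calls.
from multiprocessing import Pool
from nltk import sent_tokenize, word_tokenize
from nltk.tag import StanfordNERTagger

ner = None  # populated separately inside each worker process

def init_worker():
    global ner
    ner = StanfordNERTagger(
        'classifiers/english.all.3class.distsim.crf.ser.gz',  # placeholder path
        'stanford-ner.jar')                                   # placeholder path

def tag_document(doc):
    sents = [word_tokenize(s) for s in sent_tokenize(doc)]
    return ner.tag_sents(sents)  # one JVM launch per document, not per sentence

if __name__ == '__main__':
    pool = Pool(processes=4, initializer=init_worker)
    results = pool.map(tag_document, documents)  # `documents`: your corpus
    pool.close()
    pool.join()
```

Batching through `tag_sents()` also cuts the number of JVM launches from one per sentence down to one per document.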
I am using the 3-class classifier for NER and the Maxent POS tagger.
P.S. - Please ignore the names of the Python files; I have not yet removed the stop words or punctuation from the raw text.
Edit - Using cProfile and sorting by cumulative time, I got the following top 20 calls:
```
600792 function calls (595912 primitive calls) in 60.795 seconds
Ordered by: cumulative time
List reduced from 3357 to 20 due to restriction <20>
ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    0.000    0.000   60.811   60.811 removeStopWords.py:1(<module>)
     1    0.000    0.000   58.923   58.923 removeStopWords.py:76(partofspeechRecognition)
    28    0.001    0.000   58.883    2.103 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/tag/stanford.py:69(tag)
    28    0.004    0.000   58.883    2.103 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/tag/stanford.py:73(tag_sents)
    28    0.001    0.000   56.927    2.033 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/internals.py:63(java)
   141    0.001    0.000   56.532    0.401 /usr/lib/python2.7/subprocess.py:769(communicate)
   140    0.002    0.000   56.530    0.404 /usr/lib/python2.7/subprocess.py:1408(_communicate)
   140    0.008    0.000   56.492    0.404 /usr/lib/python2.7/subprocess.py:1441(_communicate_with_poll)
   400   56.474    0.141   56.474    0.141 {built-in method poll}
     1    0.001    0.001   43.522   43.522 removeStopWords.py:69(partofspeechRecognitionRes)
     1    0.000    0.000   15.401   15.401 removeStopWords.py:62(namedEntityRecognition)
     1    0.001    0.001   15.367   15.367 removeStopWords.py:46(namedEntityRecognitionRes)
   141    0.004    0.000    2.302    0.016 /usr/lib/python2.7/subprocess.py:651(__init__)
   141    0.020    0.000    2.287    0.016 /usr/lib/python2.7/subprocess.py:1199(_execute_child)
    56    0.002    0.000    1.933    0.035 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/internals.py:38(config_java)
    56    0.001    0.000    1.931    0.034 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/internals.py:599(find_binary)
   112    0.002    0.000    1.930    0.017 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/internals.py:582(find_binary_iter)
   118    0.009    0.000    1.928    0.016 /home/datascience/pythonEnv/local/lib/python2.7/site-packages/nltk/internals.py:453(find_file_iter)
     1    0.001    0.001    1.318    1.318 /usr/lib/python2.7/pickle.py:1383(load)
     1    0.046    0.046    1.317    1.317 /usr/lib/python2.7/pickle.py:851(load)
```
Answer 0 (score: 1)
It seems the Python wrapper is the culprit; the Java implementation itself is not taking much time. This lines up with what @Gabor Angeli mentioned; try his suggestion.
Hope it helps!
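One cheap mitigation along those lines: the NLTK wrapper starts a new JVM for every `tag()` call, so batching all of a document's sentences through `tag_sents()` amortizes that startup cost. A small sketch, assuming `ner` is an existing `StanfordNERTagger` and `sentences` is a list of token lists:

```python
# Instead of one java launch per sentence:
#     tags = [ner.tag(s) for s in sentences]   # N subprocess launches
# batch the whole document into a single call:
tags = ner.tag_sents(sentences)                # 1 subprocess launch
```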
Answer 1 (score: 0)
Maybe this has been solved already, but for anyone trying to speed up Stanford NLP from Python, here is a tried-and-tested answer: How to speedup Stanford NLP in Python?
Basically, it has you run an NER server in the background and call it through the `sner` library, which then handles all of the Stanford NLP-related work.
Here is the answer. In the folder where Stanford NLP is unzipped, start the Stanford NLP server in the background. Part of the linked answer is reproduced below:
```
java -Djava.ext.dirs=./lib -cp stanford-ner.jar edu.stanford.nlp.ie.NERServer -port 9199 -loadClassifier ./classifiers/english.all.3class.distsim.crf.ser.gz
```
Then initialize the Stanford NLP server tagger in Python using the `sner` library:
```python
from sner import Ner
tagger = Ner(host='localhost', port=9199)
```
Then run the tagger:
```python
%%time
classified_text = tagger.get_entities(text)
print(classified_text)
```
Output:

```
[('My', 'O'), ('name', 'O'), ('is', 'O'), ('John', 'PERSON'), ('Doe', 'PERSON')]
CPU times: user 4 ms, sys: 0 ns, total: 4 ms
Wall time: 18.2 ms
```
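Scaled up to the original 23,000-document workload, the win is that the tagger object is created once and each call is just a socket round trip, with no per-call JVM startup. A rough sketch, where `documents` is an assumed list of raw strings and the server above is already running:

```python
from nltk import sent_tokenize
from sner import Ner

tagger = Ner(host='localhost', port=9199)  # talks to the running NERServer

for doc in documents:  # `documents`: assumed list of raw document strings
    for sentence in sent_tokenize(doc):
        entities = tagger.get_entities(sentence)  # socket call, no JVM spawn
        # ... collect or post-process `entities` here
```

Note that `NERServer` only serves NER; for POS tagging over a long-lived server you would need something along the lines of the CoreNLP server instead.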