I want to use the Python Stanford NER module, but I keep getting an error. I searched the internet but found nothing. Below is the basic usage that produces the error.
import ner
tagger = ner.HttpNER(host='localhost', port=8080)
tagger.get_entities("University of California is located in California, United States")
Error:
Traceback (most recent call last):
  File "<pyshell#3>", line 1, in <module>
    tagger.get_entities("University of California is located in California, United States")
  File "C:\Python27\lib\site-packages\ner\client.py", line 81, in get_entities
    tagged_text = self.tag_text(text)
  File "C:\Python27\lib\site-packages\ner\client.py", line 165, in tag_text
    c.request('POST', self.location, params, headers)
  File "C:\Python27\lib\httplib.py", line 1057, in request
    self._send_request(method, url, body, headers)
  File "C:\Python27\lib\httplib.py", line 1097, in _send_request
    self.endheaders(body)
  File "C:\Python27\lib\httplib.py", line 1053, in endheaders
    self._send_output(message_body)
  File "C:\Python27\lib\httplib.py", line 897, in _send_output
    self.send(msg)
  File "C:\Python27\lib\httplib.py", line 859, in send
    self.connect()
  File "C:\Python27\lib\httplib.py", line 836, in connect
    self.timeout, self.source_address)
  File "C:\Python27\lib\socket.py", line 575, in create_connection
    raise err
error: [Errno 10061] No connection could be made because the target machine actively refused it
Using Windows 10 with the latest Java installed.
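For reference, Errno 10061 means the connection was refused because nothing was listening on the target port. A minimal check (a sketch; the host and port are taken from the code above):

import socket

# connect_ex returns 0 only if something accepts a TCP connection on that port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
result = s.connect_ex(('localhost', 8080))
s.close()
print('listening' if result == 0 else 'refused (errno %d)' % result)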
Answer 0 (score: 1)
NER comes with a .bat file for Windows and a .sh file for Unix/Linux. I think these files start it with a GUI. To start the service without the GUI, you should run a command similar to this:
java -mx600m -cp stanford-ner.jar edu.stanford.nlp.ie.crf.CRFClassifier -loadClassifier classifiers/english.all.3class.distsim.crf.ser.gz
This runs the NER jar, sets the memory, and sets the classifier you want to use. (I think you have to run it from inside the Stanford NER directory.)
Once the NER program is running, you can run your Python code and query NER.
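Note that the question's ner.HttpNER expects something listening on port 8080. A minimal sketch, assuming the Stanford NER distribution's edu.stanford.nlp.ie.NERServer class and the ner package's SocketNER client (both are my assumptions, not part of the answer above):

# Start a socket server from the Stanford NER directory (assumed class and flags):
# java -mx1000m -cp stanford-ner.jar edu.stanford.nlp.ie.NERServer \
#     -loadClassifier classifiers/english.all.3class.distsim.crf.ser.gz -port 8080
import ner

# SocketNER speaks NERServer's raw socket protocol; HttpNER needs an HTTP endpoint.
tagger = ner.SocketNER(host='localhost', port=8080)
print(tagger.get_entities("University of California is located in California, United States"))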
Answer 1 (score: 0)
This code will read every text file from the "TextFilestoTest" folder, detect the entities, and store them in a dataframe (Test).
import os
import nltk
import pandas as pd
import collections
from nltk.tag import StanfordNERTagger
from nltk.tokenize import word_tokenize

stanford_classifier = 'ner-trained-EvensTrain.ser.gz'
stanford_ner_path = 'stanford-ner.jar'

# Creating Tagger Object
st = StanfordNERTagger(stanford_classifier, stanford_ner_path, encoding='utf-8')

# Point NLTK at the Java binary; NLTK reads the JAVAHOME environment variable.
java_path = "C:/Program Files (x86)/Java/jre1.8.0_191/bin/java.exe"
os.environ['JAVAHOME'] = java_path
def get_continuous_chunks(tagged_sent):
    continuous_chunk = []
    current_chunk = []
    for token, tag in tagged_sent:
        if tag != "O":  # "O" (the letter) is Stanford NER's outside-of-entity tag; the original compared against the digit "0"
            current_chunk.append((token, tag))
        else:
            if current_chunk:  # if the current chunk is not empty
                continuous_chunk.append(current_chunk)
                current_chunk = []
    # Flush the final current_chunk into the continuous_chunk, if any.
    if current_chunk:
        continuous_chunk.append(current_chunk)
    return continuous_chunk
TestFiles = './TextFilestoTest/'
files_path = os.listdir(TestFiles)
Test = {}
for i in files_path:
    p = (TestFiles + i)
    g = (os.path.splitext(i)[0])
    Test[str(g)] = open(p, 'r').read()

## Predict labels of all words of the 200 text files and insert them into a dataframe
df_fin = pd.DataFrame(columns=["filename", "Word", "Label"])
for i in Test:
    test_text = Test[i]
    test_text = test_text.replace("\n", " ")
    tokenized_text = test_text.split(" ")
    classified_text = st.tag(tokenized_text)
    ne_tagged_sent = classified_text
    named_entities = get_continuous_chunks(ne_tagged_sent)
    flat_list = [item for sublist in named_entities for item in sublist]
    for fl in flat_list:
        df_ = pd.DataFrame()
        df_["filename"] = [i]
        df_["Word"] = [fl[0]]
        df_["Label"] = [fl[1]]
        df_fin = df_fin.append(df_)  # note: DataFrame.append was removed in pandas 2.0; use pd.concat there

df_fin_vone = pd.DataFrame(columns=["filename", "Word", "Label"])
test_files_len = list(set(df_fin['filename']))
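As a quick sanity check on a single sentence (a sketch; it assumes the jar and the custom classifier paths above resolve, and it reuses the word_tokenize import from the code):

# word_tokenize needs NLTK's punkt data: nltk.download('punkt')
sample_sent = st.tag(word_tokenize("University of California is located in California, United States"))
print(get_continuous_chunks(sample_sent))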
If you have any questions about the above, leave a comment and I will answer. Thanks.