I have two versions of a chatbot for Skype: one in Python 3 and one in Python 2. I was running both apps with Flask, but Flask's built-in server is meant for development and ran into problems in production.
So I tried running the apps through gunicorn. The Python 3 version works perfectly, but the Python 2 version has an issue. When I start it with the command "gunicorn -w 1 -b 0.0.0.0:5001 app:app --timeout 1500", it begins listening on the port as expected. However, when I send a request from the chat interface, the message is received, the app goes on to predict the category of the message, and that step fails. The error is apparently raised from inside sklearn, but I can't figure out where it originates.
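For context, the relevant part of the Python 2 app looks roughly like this (a simplified sketch reconstructed from the traceback; the route parameters, return value, and exact wiring are approximate and the real code does more):

```python
# app.py -- simplified sketch; parameter names and the response format are approximate
from flask import Flask, request, jsonify
from intent_entity_extractor_module import query_parser

app = Flask(__name__)

@app.route('/ief', methods=['GET'])
def intent_finder():
    question = request.args.get('question', '')
    emp_name = request.args.get('emp_name', '')
    chat_sess_id = request.args.get('chat_sess_id', '')
    # query_parser() eventually calls intent_classifier.predict([...]),
    # which is where the traceback below is raised
    intent, enty, response, sm_intent, domain, topic_flag, prev_chat = query_parser(
        question.decode('utf-8'), emp_name, chat_sess_id)
    return jsonify({'intent': intent, 'response': response})
```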
The error message is shown below:
[2019-07-01 15:21:32,232] ERROR in app: Exception on /ief [GET]
Traceback (most recent call last):
File "/home/ubuntu/python2/local/lib/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/home/ubuntu/python2/local/lib/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/ubuntu/python2/local/lib/python2.7/site-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/ubuntu/python2/local/lib/python2.7/site-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/home/ubuntu/python2/local/lib/python2.7/site-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/ubuntu/sample/app.py", line 115, in intent_finder
intent,enty,response,sm_intent,domain,topic_flag,prev_chat= query_parser(question.decode('utf-8'),emp_name,chat_sess_id)
File "/home/ubuntu/sample/intent_entity_extractor_module.py", line 667, in query_parser
print(intent_classifier.predict(['who is usereeeeee']))
File "/home/ubuntu/python2/local/lib/python2.7/site-packages/sklearn/utils/metaestimators.py", line 115, in <lambda>
out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
File "/home/ubuntu/python2/local/lib/python2.7/site-packages/sklearn/pipeline.py", line 306, in predict
Xt = transform.transform(Xt)
File "/home/ubuntu/python2/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 1409, in transform
X = super(TfidfVectorizer, self).transform(raw_documents)
File "/home/ubuntu/python2/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 923, in transform
_, X = self._count_vocab(raw_documents, fixed_vocab=True)
File "/home/ubuntu/python2/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 792, in _count_vocab
for feature in analyze(doc):
File "/home/ubuntu/python2/local/lib/python2.7/site-packages/sklearn/feature_extraction/text.py", line 266, in <lambda>
tokenize(preprocess(self.decode(doc))), stop_words)
File "<ipython-input-3-dfaaaae57141>", line 10, in clean
NameError: global name 'pos_tag' is not defined
I suspect pos_tag from nltk is the problem. I cross-checked all the imports and everything seems fine. I also tried printing the output of pos_tag in my code, and it prints as expected. Still, I can't figure out where this error from inside sklearn comes from.
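The custom text-cleaning function that the vectorizer calls (it was defined in a Jupyter notebook cell, as the "<ipython-input-...>" frame in the traceback shows, before the pipeline was trained and pickled) follows this pattern; the body below is an illustrative simplification, not the exact code:

```python
# Illustrative simplification of the notebook-defined 'clean' function that the
# pickled TfidfVectorizer calls; the real one does more, but the shape is the
# same: it relies on nltk's pos_tag being resolvable when the function runs.
from nltk import pos_tag, word_tokenize
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def clean(doc):
    tokens = word_tokenize(doc.lower())
    # This POS-tagging call is the line the NameError points at when the
    # pipeline runs under gunicorn
    tagged = pos_tag(tokens)
    return ' '.join(lemmatizer.lemmatize(word) for word, tag in tagged)
```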