ImportError: No module named nltk.classify

Date: 2018-07-19 15:54:50

Tags: python django docker-compose nltk

I'm using Python 2.7 and Django 1.11.14. I dockerized my app, and when I run docker-compose up it fails to import nltk.classify.

I get:

....
web_1  |   File "/code/personal/classifier.py", line 6, in <module>
web_1  |     from nltk.classify import ClassifierI
web_1  | ImportError: No module named nltk.classify

I added a few lines to my requirements.txt, but it still doesn't work. requirements.txt:

Django==1.11.14
psycopg2
nltk
nltk.classify

Dockerfile:

FROM python:2
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/

Edit 1

I actually found the problem: I had changed requirements.txt and then run docker-compose up, but that does not rebuild the image, so the pip install -r requirements.txt step in the Dockerfile never picked up the new requirements.txt. I just had to add all the libraries to requirements.txt and rebuild, as shown below. Even so, importing the stopwords from nltk.corpus still fails.
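
A minimal way to force that rebuild so the updated requirements.txt actually gets installed (standard docker-compose usage):

# rebuild the image before starting the containers
docker-compose up --build
# or, equivalently:
docker-compose build && docker-compose up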

My requirements.txt:

Django==1.11.14
psycopg2
nltk
statistics

and when I run docker-compose up I get:

web_1 |   File "/usr/local/lib/python2.7/site-packages/nltk/corpus/util.py", line 81, in __load
web_1 |     except LookupError: raise e
web_1 | LookupError:
web_1 | **********************************************************************
web_1 |   Resource stopwords not found.
web_1 |   Please use the NLTK Downloader to obtain the resource:
web_1 |
web_1 |   >>> import nltk
web_1 |   >>> nltk.download('stopwords')
web_1 |
web_1 |   Searched in:
web_1 |     - '/root/nltk_data'
web_1 |     - '/usr/share/nltk_data'
web_1 |     - '/usr/local/share/nltk_data'
web_1 |     - '/usr/lib/nltk_data'
web_1 |     - '/usr/local/lib/nltk_data'
web_1 |     - '/usr/local/nltk_data'
web_1 |     - '/usr/local/share/nltk_data'
web_1 |     - '/usr/local/lib/nltk_data'
web_1 | **********************************************************************

2 answers:

Answer 0 (score: 0)

CURL="curl -u <github_user> https://api.github.com/repos/<owner>/<repo>/releases"; \ ASSET_ID=$(eval "$CURL/tags/<tag>" | jq .assets[0].id); \ eval "$CURL/assets/$ASSET_ID -LJOH 'Accept: application/octet-stream'" 不是软件包,但包含在nltk.classify软件包中。由于命令ntlk可能会失败,因此不会安装任何软件包。尝试从RUN pip install -r requirements.txt中删除nltk.classify,然后重试。

Answer 1 (score: 0)

I just added the following lines to the script that uses the stopwords:

import nltk
nltk.download('stopwords')
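
A variation on this (my sketch, not part of the original answer): download the corpus once at image build time instead of on every container start, by adding a line to the Dockerfile after the pip install step:

# fetch the stopwords corpus into the image at build time
RUN python -m nltk.downloader stopwords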