Thanks, everyone! Although I still don't know why, I reinstalled the whole Anaconda with all settings left at their defaults, and now it works.
My install was not in the default path on drive C:; instead I installed Anaconda in a path I created myself, D:\Ana. I then installed Scrapy with conda install -c conda-forge scrapy.
When I ran scrapy startproject tutorial, I got the following error:
> C:\Users\zhang>scrapy startproject tutorial
> Traceback (most recent call last):
>   File "D:\Ana\Scripts\scrapy-script.py", line 6, in <module>
>     from scrapy.cmdline import execute
>   File "D:\Ana\lib\site-packages\scrapy\__init__.py", line 34, in <module>
>     from scrapy.spiders import Spider
>   File "D:\Ana\lib\site-packages\scrapy\spiders\__init__.py", line 10, in <module>
>     from scrapy.http import Request
>   File "D:\Ana\lib\site-packages\scrapy\http\__init__.py", line 11, in <module>
>     from scrapy.http.request.form import FormRequest
>   File "D:\Ana\lib\site-packages\scrapy\http\request\form.py", line 11, in <module>
>     import lxml.html
>   File "D:\Ana\lib\site-packages\lxml\html\__init__.py", line 53, in <module>
>     from .. import etree
> ImportError: DLL load failed: 找不到指定的模块。(translation: the specified module could not be found)
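The traceback ends inside lxml, when it tries to load its compiled etree extension, so Scrapy itself is probably not the problem. A minimal way to confirm this (a sketch, run from Python in the same environment) is to attempt the failing import directly:

```python
import importlib


def try_import(module_name):
    """Attempt to import a module; return (ok, message).

    If the module's compiled extension cannot find its DLLs, the
    import raises the same "DLL load failed" ImportError seen above.
    """
    try:
        importlib.import_module(module_name)
        return True, f"{module_name} imported OK"
    except ImportError as e:
        return False, f"{module_name} failed: {e}"


# Reproduce the import that fails in the traceback, outside Scrapy.
ok, msg = try_import("lxml.etree")
print(msg)
```

If this reproduces the error, the issue can be investigated as an lxml/DLL problem rather than a Scrapy one.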
I tried removing and reinstalling Scrapy again, but it still did not work.
Could you give me some advice? Thanks!
Update:
I reinstalled lxml, and things look better now, because when I type scrapy in cmd, it shows:
> C:\Users\zhang>scrapy
> Scrapy 1.6.0 - no active project
>
> Usage:
>   scrapy <command> [options] [args]
>
> Available commands:
>   bench         Run quick benchmark test
>   fetch         Fetch a URL using the Scrapy downloader
>   genspider     Generate new spider using pre-defined templates
>   runspider     Run a self-contained spider (without creating a project)
>   settings      Get settings values
>   shell         Interactive scraping console
>   startproject  Create new project
>   version       Print Scrapy version
>   view          Open URL in browser, as seen by Scrapy
>
>   [ more ]      More commands available when run from project directory
>
> Use "scrapy <command> -h" to see more info about a command
However, if I type scrapy startproject XXX, it shows:
> C:\Users\zhang>scrapy startproject
> Traceback (most recent call last):
>   File "D:\Ana\Scripts\scrapy-script.py", line 10, in <module>
>     sys.exit(execute())
>   File "D:\Ana\lib\site-packages\scrapy\cmdline.py", line 149, in execute
>     cmd.crawler_process = CrawlerProcess(settings)
>   File "D:\Ana\lib\site-packages\scrapy\crawler.py", line 254, in __init__
>     log_scrapy_info(self.settings)
>   File "D:\Ana\lib\site-packages\scrapy\utils\log.py", line 149, in log_scrapy_info
>     for name, version in scrapy_components_versions()
>   File "D:\Ana\lib\site-packages\scrapy\utils\versions.py", line 35, in scrapy_components_versions
>     ("pyOpenSSL", _get_openssl_version()),
>   File "D:\Ana\lib\site-packages\scrapy\utils\versions.py", line 43, in _get_openssl_version
>     import OpenSSL
>   File "D:\Ana\lib\site-packages\OpenSSL\__init__.py", line 8, in <module>
>     from OpenSSL import crypto, SSL
>   File "D:\Ana\lib\site-packages\OpenSSL\crypto.py", line 16, in <module>
>     from OpenSSL._util import (
>   File "D:\Ana\lib\site-packages\OpenSSL\_util.py", line 6, in <module>
>     from cryptography.hazmat.bindings.openssl.binding import Binding
>   File "D:\Ana\lib\site-packages\cryptography\hazmat\bindings\openssl\binding.py", line 195, in <module>
>     Binding.init_static_locks()
>   File "D:\Ana\lib\site-packages\cryptography\hazmat\bindings\openssl\binding.py", line 142, in init_static_locks
>     __import__("_ssl")
> ImportError: DLL load failed: 找不到指定的程序。(translation: the specified procedure could not be found)
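This second chain ends at `__import__("_ssl")`, i.e. Python's own `_ssl` extension fails to load, not any Scrapy code. One possible cause I have seen mentioned for Anaconda on Windows (an assumption here, not confirmed for this machine) is that the interpreter is run from a plain cmd window where Anaconda's `Library\bin` directory, which contains the OpenSSL DLLs, is not on PATH. A minimal sketch to check both conditions in the same environment:

```python
import os
import sys

# Try the exact import that fails at the bottom of the traceback.
try:
    import _ssl  # noqa: F401
    print("_ssl loads fine")
except ImportError as e:
    print("_ssl failed:", e)

# On Anaconda for Windows, OpenSSL DLLs typically live under
# <prefix>\Library\bin; report whether that directory is on PATH.
# (Assumption: this layout matches the install at D:\Ana.)
lib_bin = os.path.join(sys.prefix, "Library", "bin")
on_path = lib_bin.lower() in os.environ.get("PATH", "").lower()
print("Library bin dir:", lib_bin, "- on PATH:", on_path)
```

If `_ssl` fails here too, running the same commands from the Anaconda Prompt (which sets up PATH) would be one thing to try.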