Starting a new project in Scrapy

Date: 2016-10-18 09:39:18

Tags: python python-2.7 web-scraping scrapy scrapy-spider

I have installed Python 2.7.12 on a Windows 7 system, along with pywin32 and Visual C++. When I type the command pip --version, it produces no output at all; the cursor simply moves to the next line and blinks.

But when I use the command python -m pip --version, the pip version is displayed. Likewise, to install Scrapy I had to use python -m pip install scrapy, and Scrapy installed successfully.

I have set the path correctly in the environment variables: C:\Python27;C:\Python27\Scripts;
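As a sanity check, the where command (built into Windows 7) reports which executable cmd.exe actually resolves for a given name; a quick sketch, with output omitted since it is not verified here:

C:\>where pip
C:\>where scrapy

If both resolve to C:\Python27\Scripts\ yet still print nothing when run, the PATH itself is probably fine, which would be consistent with python -m pip working.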

When I went to start my new Scrapy project, I used the command scrapy startproject project_name. Again, the cursor just moved to the next line and blinked; no error message and no other output were produced.

Yet when I tried again and again, it had in fact created the folder with the corresponding files in the directory.

The same problem occurred when I had developed my code and tried to run the spider with the command scrapy crawl name: no response.
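For reference, my spider follows the official Scrapy tutorial; a minimal sketch of that kind of spider (the class name, URLs and file names below are assumed from the tutorial, since my actual code is not shown here):

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        # Save each downloaded page to a local HTML file,
        # as the tutorial does.
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)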

Now, because of the same problem, I am unable to create a new project.

Could someone please point out the likely cause of this error and a solution?

Update: it worked when I followed the Scrapy tutorial using the command python -m scrapy.cmdline <command> <arguments>, and everything ran fine up through the crawl command. But when I used the command python -m scrapy.cmdline shell 'http://quotes.toscrape.com/page/1/', it showed an error. Both transcripts follow:

C:\Users\MinorMiracles\Desktop\tutorial>python -m scrapy.cmdline crawl quotes
2016-10-19 10:26:15 [scrapy] INFO: Scrapy 1.2.0 started (bot: tutorial)
2016-10-19 10:26:15 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'tutorial'}
2016-10-19 10:26:16 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2016-10-19 10:26:17 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-10-19 10:26:17 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-10-19 10:26:17 [scrapy] INFO: Enabled item pipelines:
[]
2016-10-19 10:26:17 [scrapy] INFO: Spider opened
2016-10-19 10:26:17 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-10-19 10:26:17 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-10-19 10:26:18 [scrapy] DEBUG: Crawled (404) <GET http://quotes.toscrape.com/robots.txt> (referer: None)
2016-10-19 10:26:18 [scrapy] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/1/> (referer: None)
2016-10-19 10:26:18 [quotes] DEBUG: Saved file quotes-1.html
2016-10-19 10:26:18 [scrapy] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/2/> (referer: None)
2016-10-19 10:26:19 [quotes] DEBUG: Saved file quotes-2.html
2016-10-19 10:26:19 [scrapy] INFO: Closing spider (finished)
2016-10-19 10:26:19 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 675,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 5974,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 2,
 'downloader/response_status_count/404': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 10, 19, 4, 56, 19, 56000),
 'log_count/DEBUG': 6,
 'log_count/INFO': 7,
 'response_received_count': 3,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2016, 10, 19, 4, 56, 17, 649000)}
2016-10-19 10:26:19 [scrapy] INFO: Spider closed (finished)

C:\Users\MinorMiracles\Desktop\tutorial>python -m scrapy.cmdline shell 'http://quotes.toscrape.com/page/1/'
2016-10-19 11:11:40 [scrapy] INFO: Scrapy 1.2.0 started (bot: tutorial)
2016-10-19 11:11:40 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'ROBOTSTXT_OBEY': True, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial', 'LOGSTATS_INTERVAL': 0}
2016-10-19 11:11:40 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2016-10-19 11:11:40 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-10-19 11:11:40 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-10-19 11:11:40 [scrapy] INFO: Enabled item pipelines:
[]
2016-10-19 11:11:40 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-10-19 11:11:40 [scrapy] INFO: Spider opened
2016-10-19 11:11:42 [scrapy] DEBUG: Retrying <GET http://'http:/robots.txt> (failed 1 times): DNS lookup failed: address "'http:" not found: [Errno 11004] getaddrinfo failed.
2016-10-19 11:11:45 [scrapy] DEBUG: Retrying <GET http://'http:/robots.txt> (failed 2 times): DNS lookup failed: address "'http:" not found: [Errno 11004] getaddrinfo failed.
2016-10-19 11:11:47 [scrapy] DEBUG: Gave up retrying <GET http://'http:/robots.txt> (failed 3 times): DNS lookup failed: address "'http:" not found: [Errno 11004] getaddrinfo failed.
2016-10-19 11:11:47 [scrapy] ERROR: Error downloading <GET http://'http:/robots.txt>: DNS lookup failed: address "'http:" not found: [Errno 11004] getaddrinfo failed.
DNSLookupError: DNS lookup failed: address "'http:" not found: [Errno 11004] getaddrinfo failed.
2016-10-19 11:11:49 [scrapy] DEBUG: Retrying <GET http://'http://quotes.toscrape.com/page/1/'> (failed 1 times): DNS lookup failed: address "'http:" not found: [Errno 11004] getaddrinfo failed.
2016-10-19 11:11:51 [scrapy] DEBUG: Retrying <GET http://'http://quotes.toscrape.com/page/1/'> (failed 2 times): DNS lookup failed: address "'http:" not found: [Errno 11004] getaddrinfo failed.
2016-10-19 11:11:54 [scrapy] DEBUG: Gave up retrying <GET http://'http://quotes.toscrape.com/page/1/'> (failed 3 times): DNS lookup failed: address "'http:" not found: [Errno 11004] getaddrinfo failed.
Traceback (most recent call last):
  File "C:\Python27\lib\runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "C:\Python27\lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 161, in <module>
    execute()
  File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 88, in _run_print
_help
    func(*a, **kw)
  File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 149, in _run_comm
and
    cmd.run(args, opts)
  File "C:\Python27\lib\site-packages\scrapy\commands\shell.py", line 71, in run

    shell.start(url=url)
  File "C:\Python27\lib\site-packages\scrapy\shell.py", line 47, in start
    self.fetch(url, spider)
  File "C:\Python27\lib\site-packages\scrapy\shell.py", line 112, in fetch
    reactor, self._schedule, request, spider)
  File "C:\Python27\lib\site-packages\twisted\internet\threads.py", line 122, in
 blockingCallFromThread
    result.raiseException()
  File "<string>", line 2, in raiseException
twisted.internet.error.DNSLookupError: DNS lookup failed: address "'http:" not found: [Errno 11004] getaddrinfo failed.

Can anyone tell me what is going wrong?

1 Answer:

Answer 0 (score: 0)

Use the alternative invocation python -m scrapy.cmdline <command> <arguments> (for example, python -m scrapy.cmdline version -v).
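A hedged usage sketch (the project directory and spider name are taken from the question; output omitted):

C:\Users\MinorMiracles\Desktop\tutorial>python -m scrapy.cmdline version -v
C:\Users\MinorMiracles\Desktop\tutorial>python -m scrapy.cmdline startproject newproject
C:\Users\MinorMiracles\Desktop\tutorial>python -m scrapy.cmdline crawl quotes

As for the shell error above: the log shows requests for http://'http:/robots.txt, i.e. cmd.exe passed the single quotes through as literal characters of the URL. On Windows, wrap the URL in double quotes instead:

C:\Users\MinorMiracles\Desktop\tutorial>python -m scrapy.cmdline shell "http://quotes.toscrape.com/page/1/"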

Thanks Paul