Scrapy deploy stopped working

Date: 2013-06-17 10:15:06

Tags: python scrapy scrapyd

I am trying to deploy a scrapy project using scrapyd, but it gives me the following error:

sudo scrapy deploy default -p eScraper
Building egg of eScraper-1371463750
'build/scripts-2.7' does not exist -- can't clean it
zip_safe flag not set; analyzing archive contents...
eScraperInterface.settings: module references __file__
eScraper.settings: module references __file__
Deploying eScraper-1371463750 to http://localhost:6800/addversion.json
Server response (200):
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/webservice.py", line 18, in render
    return JsonResource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/txweb.py", line 10, in render
    r = resource.Resource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/twisted/web/resource.py", line 250, in render
    return m(request)
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/webservice.py", line 66, in render_POST
    spiders = get_spider_list(project)
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/utils.py", line 65, in get_spider_list
    raise RuntimeError(msg.splitlines()[-1])
RuntimeError: OSError: [Errno 20] Not a directory: '/tmp/eScraper-1371463750-Lm8HLh.egg/images'

Earlier I was able to deploy the project without problems, but now I can't. However, running the spider directly with scrapy crawl spiderName works fine. Can someone help me?

1 Answer:

Answer 0 (score: 1)

Try the following two things:
1. You may have deployed too many versions; try deleting some of the old ones (see the sketch below).
2. Before deploying, delete the build folder and the generated setup file.
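As a rough sketch of what that could look like, using scrapyd's listversions.json / delversion.json endpoints; the host, port, project name and version string are taken from the output above and are only illustrative:

# List the versions currently deployed for the project
curl "http://localhost:6800/listversions.json?project=eScraper"

# Delete one of the older versions (the version string here is just an example)
curl http://localhost:6800/delversion.json -d project=eScraper -d version=1371463750

# Step 2: remove the locally generated build artifacts before redeploying
# (file/folder names may vary per project)
rm -rf build/ *.egg-info setup.py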

As for the crawler-running question: scrapyd will return an "OK" response together with a job id even if you schedule a spider with an arbitrary name, or one that has not been deployed yet.
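In other words, a successful schedule.json call is not proof that the spider is actually deployed. A minimal illustration (the spider name below is deliberately made up):

# Schedule a spider that does not exist in the deployed project
curl http://localhost:6800/schedule.json -d project=eScraper -d spider=someNonexistentSpider
# Typical response: {"status": "ok", "jobid": "..."}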