scrapyd deploy shows 0 spiders

Time: 2012-08-29 12:21:52

Tags: python scrapy

I am using Scrapy for a project. To deploy it, I first ran the following command to list the configured deploy targets:

$scrapy deploy -l

and got the following output:

scrapysite           http://localhost:6800/

$cat scrapy.cfg

[settings] 
default = scrapBib.settings

[deploy:scrapysite]  
url = http://localhost:6800/  
project = scrapBib
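
(As a first local check, Scrapy can report which spiders it detects in the project; a minimal sketch, assuming it is run from the directory that contains scrapy.cfg:)

$scrapy list

If this prints nothing, the packaged egg will also contain 0 spiders, so the problem lies in the project itself rather than in scrapyd.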

$scrapy deploy scrapysite -p scrapBib

Building egg of scrapBib-1346242513
'build/lib.linux-x86_64-2.7' does not exist -- can't clean it
'build/bdist.linux-x86_64' does not exist -- can't clean it
'build/scripts-2.7' does not exist -- can't clean it
zip_safe flag not set; analyzing archive contents...
Deploying scrapBib-1346242513 to http://localhost:6800/addversion.json
2012-08-29 17:45:14+0530 [HTTPChannel,22,127.0.0.1] 127.0.0.1 - - [29/Aug/2012:12:15:13 +0000] "POST /addversion.json HTTP/1.1" 200 79 "-" "Python-urllib/2.7"
Server response (200):
{"status": "ok", "project": "scrapBib", "version": "1346242513", "spiders": 0}

As you can see, "spiders" comes back as 0 even though I have written 3 spiders in the project/spiders/ folder. Because of that I cannot start a crawl with a curl request. Please help.
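
(For reference, a minimal sketch of the server-side cross-check and of starting a crawl, assuming scrapyd's default endpoints on localhost:6800; the spider name below is a placeholder:)

$curl "http://localhost:6800/listspiders.json?project=scrapBib"
$curl http://localhost:6800/schedule.json -d project=scrapBib -d spider=<spider_name>

listspiders.json reports the spiders scrapyd found in the deployed egg, and schedule.json is the endpoint a curl request would use to start a crawl once that count is non-zero.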

1 Answer:

Answer 0: (score: 5)

I once ran into this problem as well, and did two things:

1) Delete project.egg-info, build, and setup.py from your local system.

2) Delete all deployed versions of the project from the server.

Then try deploying again and it should be fixed; a command-level sketch of both steps follows below.
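
(A minimal sketch of those two steps plus the re-deploy, assuming the commands are run from the directory holding scrapy.cfg and that scrapyd runs on localhost:6800; delproject.json is scrapyd's endpoint for removing a project and all of its uploaded versions:)

$rm -rf project.egg-info build setup.py
$curl http://localhost:6800/delproject.json -d project=scrapBib
$scrapy deploy scrapysite -p scrapBib

After this, the server response should report a non-zero "spiders" count.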