Scrapy tutorial (noob) - Crawled 0 pages

Asked: 2014-01-26 12:37:55

Tags: python scrapy data-extraction

I have been trying to follow the Scrapy tutorial (the very beginning of it), and after running the command at the top level of the project (i.e. the level of scrapy.cfg), I get the following output:

 mikey@ubuntu:~/scrapy/tutorial$ scrapy crawl dmoz
/usr/lib/pymodules/python2.7/scrapy/settings/deprecated.py:26: ScrapyDeprecationWarning: You are using the following settings which are deprecated or obsolete (ask scrapy-users@googlegroups.com for alternatives):
    BOT_VERSION: no longer used (user agent defaults to Scrapy now)
  warnings.warn(msg, ScrapyDeprecationWarning)
2014-01-26 04:17:06-0800 [scrapy] INFO: Scrapy 0.22.0 started (bot: tutorial)
2014-01-26 04:17:06-0800 [scrapy] INFO: Optional features available: ssl, http11, boto, django
2014-01-26 04:17:06-0800 [scrapy] INFO: Overridden settings: {'DEFAULT_ITEM_CLASS': 'tutorial.items.TutorialItem', 'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'USER_AGENT': 'tutorial/1.0', 'BOT_NAME': 'tutorial'}
2014-01-26 04:17:06-0800 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-01-26 04:17:06-0800 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-01-26 04:17:06-0800 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-01-26 04:17:06-0800 [scrapy] INFO: Enabled item pipelines: 
2014-01-26 04:17:06-0800 [dmoz] INFO: Spider opened
2014-01-26 04:17:06-0800 [dmoz] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-01-26 04:17:06-0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-01-26 04:17:06-0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-01-26 04:17:06-0800 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
2014-01-26 04:17:07-0800 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
2014-01-26 04:17:07-0800 [dmoz] INFO: Closing spider (finished)
2014-01-26 04:17:07-0800 [dmoz] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 472,
     'downloader/request_count': 2,
     'downloader/request_method_count/GET': 2,
     'downloader/response_bytes': 14888,
     'downloader/response_count': 2,
     'downloader/response_status_count/200': 2,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 1, 26, 12, 17, 7, 63261),
     'log_count/DEBUG': 4,
     'log_count/INFO': 7,
     'response_received_count': 2,
     'scheduler/dequeued': 2,
     'scheduler/dequeued/memory': 2,
     'scheduler/enqueued': 2,
     'scheduler/enqueued/memory': 2,
     'start_time': datetime.datetime(2014, 1, 26, 12, 17, 6, 567929)}
2014-01-26 04:17:07-0800 [dmoz] INFO: Spider closed (finished)
mikey@ubuntu:~/scrapy/tutorial$ 

(That is: 0 pages crawled, at 0 pages/min!)

Troubleshooting so far:

1) Checked the syntax of items.py and dmoz_spider.py (both copy-pasted and typed out by hand)
2) Searched online, but couldn't find anyone else with a similar problem
3) Checked the folder structure etc. to make sure the command was being run from the right place
4) Upgraded to the latest version of Scrapy

Any suggestions? My code is exactly the same as the example.

dmoz_spider.py is...

from scrapy.spider import Spider

class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        # Called once for each downloaded start URL; saves the raw HTML
        # to a local file named after the last path segment of the URL.
        filename = response.url.split("/")[-2]
        open(filename, 'wb').write(response.body)

and items.py is...

from scrapy.item import Item, Field

class DmozItem(Item):
    # Fields for the data the tutorial eventually extracts.
    title = Field()
    link = Field()
    desc = Field()

3 Answers:

Answer 0 (score: 2):

First, you should figure out what it is you want to crawl.

You passed two start URLs to Scrapy, so it crawled them, but it couldn't find any more URLs to follow.

None of the book links on those pages fall within allowed_domains (dmoz.org), so they wouldn't be followed anyway.

You can yield Request(next_url) to crawl more links, where next_url is parsed out of the response. For example, see the sketch below.
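A minimal sketch of that approach, written as the parse method on your spider class (the XPath is an illustrative assumption, and the imports match the Scrapy 0.22 / Python 2 environment shown in your log):

from urlparse import urljoin  # Python 2 stdlib, matching the environment in the log

from scrapy.http import Request
from scrapy.selector import Selector

def parse(self, response):
    # This is assumed to be a method on DmozSpider, replacing the file-writing one.
    sel = Selector(response)
    # Pull candidate links out of the page; adjust the XPath to the links you want.
    for href in sel.xpath('//a/@href').extract():
        # Build an absolute URL and hand it back to the scheduler. Requests to
        # domains outside allowed_domains are dropped by OffsiteMiddleware.
        yield Request(urljoin(response.url, href), callback=self.parse)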

Or inherit from CrawlSpider and specify rules, as in this example.
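A hedged sketch of the CrawlSpider variant (module paths match the Scrapy 0.22 era in your log; newer versions moved these to scrapy.spiders and scrapy.linkextractors, and the allow pattern and spider name below are assumptions):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class DmozCrawlSpider(CrawlSpider):
    name = "dmoz_crawl"
    allowed_domains = ["dmoz.org"]
    start_urls = ["http://www.dmoz.org/Computers/Programming/Languages/Python/"]

    # Follow every link that stays inside the Python category and pass
    # each downloaded page to parse_item (CrawlSpider reserves parse itself).
    rules = (
        Rule(SgmlLinkExtractor(allow=(r'/Computers/Programming/Languages/Python/',)),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # Extract data from each followed page here.
        pass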

Answer 1 (score: 1):

That "Crawled 0 pages (at 0 pages/min)" line is always printed first, as soon as the spider opens, before anything has been downloaded. There is nothing wrong with your code; the log then shows both start URLs being crawled with 200 responses. You just haven't implemented anything beyond that yet.

Answer 2 (score: 0):

You have to yield an item for it to be saved, and then yield Request(<next_url>) to move on to the next page, as in the sketch below.
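A sketch of a parse method that does both, assuming the DmozItem from the question's items.py (the directory-url XPath comes from the old dmoz tutorial markup; the "next page" selector is a made-up placeholder for whatever pagination the site actually has):

from urlparse import urljoin

from scrapy.http import Request
from scrapy.selector import Selector
from tutorial.items import DmozItem

def parse(self, response):
    sel = Selector(response)
    # Yield one item per directory entry so the pipelines / feed exporter see it.
    for li in sel.xpath('//ul[@class="directory-url"]/li'):
        item = DmozItem()
        item['title'] = li.xpath('a/text()').extract()
        item['link'] = li.xpath('a/@href').extract()
        item['desc'] = li.xpath('text()').extract()
        yield item
    # Then queue the next page, if there is one (this selector is a placeholder).
    next_page = sel.xpath('//a[@class="next"]/@href').extract()
    if next_page:
        yield Request(urljoin(response.url, next_page[0]), callback=self.parse)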

You can check out this blog post to get started with Scrapy:

https://www.inkoop.io/blog/web-scraping-using-python-and-scrapy/