Scraping data from given URLs and putting it into a file using scrapy

Date: 2016-06-09 04:57:11

Tags: web-scraping scrapy screen-scraping scrapy-spider

I am trying to crawl a given website in depth and scrape text from its various pages. I am using scrapy to scrape the site.

Here is how I run the spider: scrapy crawl stack_crawler -o items.json

The items.json file comes out empty.

Here is the spider code snippet:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

#from tutorial.items import TutorialItem

from tutorial.items import DmozItem

class StackCrawlerSpider(CrawlSpider):
    name = 'stack_crawler'
    allowed_domains = ['http://www.dmoz.org']
    start_urls = ['http://www.dmoz.org/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        i = TutorialItem()
        i['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
        i['name'] = response.xpath('//div[@id="name"]').extract()
        i['description'] = response.xpath('//div[@id="description"]').extract()
        return i

Here is the log I get when I run the spider:

dummy-MacBook-Pro:spiders Dummy$ scrapy crawl stack_crawler -o items.json
2016-06-09 10:22:23 [scrapy] INFO: Scrapy 1.1.0 started (bot: tutorial)
2016-06-09 10:22:23 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'FEED_URI': 'items.json', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial', 'ROBOTSTXT_OBEY': True, 'FEED_FORMAT': 'json'}
2016-06-09 10:22:23 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2016-06-09 10:22:23 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-06-09 10:22:23 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-06-09 10:22:23 [scrapy] INFO: Enabled item pipelines:
[]
2016-06-09 10:22:23 [scrapy] INFO: Spider opened
2016-06-09 10:22:23 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-06-09 10:22:23 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2016-06-09 10:22:24 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/robots.txt> (referer: None)
2016-06-09 10:22:24 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/> (referer: None)
2016-06-09 10:22:24 [scrapy] INFO: Closing spider (finished)
2016-06-09 10:22:24 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 430,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 5694,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 6, 9, 4, 52, 24, 862900),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 6, 9, 4, 52, 23, 483092)}
2016-06-09 10:22:24 [scrapy] INFO: Spider closed (finished)

Items code snippet (items.py):

import scrapy
class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

Can anyone help me figure out what I am doing wrong at the code level, so that I actually get the data?

2 Answers:

Answer 0 (score: 2):

I think you are new to scrapy, and you have made several mistakes in this code:

1. scrapy invokes the default parse method (or start_requests) for you, so you can avoid the LinkExtractor here. Use the parse method and process the start_urls responses directly.

2. You defined one item class (DmozItem) in items.py but instantiated a different one (TutorialItem, which is never imported) in the spider. The field names differ too, so the assignments conflict.

3. The XPath expressions you use to fetch the field values do not match the page, so nothing is extracted.

Try this instead.

Spider code snippet:

import scrapy

from lxml import html
from scrapy.spiders import CrawlSpider
from tutorial.items import DmozItem

class StackCrawlerSpider(CrawlSpider):
    name = 'stack_crawler'
    # allowed_domains takes bare domains, not URLs with a scheme
    allowed_domains = ['www.dmoz.org']
    start_urls = ['http://www.dmoz.org/']

    # With no rules defined, only the start_urls responses reach parse()
    def parse(self, response):
        # Parse the raw response body with lxml and pull values from the meta tags
        doc = html.fromstring(response.body)
        i = DmozItem()
        i['title'] = doc.xpath('//meta[@property="og:title"]/@content')
        i['link'] = response.url
        i['desc'] = doc.xpath('//meta[@name="description"]/@content')
        yield i
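
As a side note, the same extraction can be done with scrapy's built-in selectors, avoiding the extra lxml parse. This is an equivalent sketch of the parse method above, not part of the original answer:

    def parse(self, response):
        # response.xpath() uses scrapy's own Selector, so lxml is not needed;
        # extract_first() returns a single string instead of a list
        i = DmozItem()
        i['title'] = response.xpath('//meta[@property="og:title"]/@content').extract_first()
        i['link'] = response.url
        i['desc'] = response.xpath('//meta[@name="description"]/@content').extract_first()
        yield i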

Items code snippet:

import scrapy
class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

This works.
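
Run it the same way (scrapy crawl stack_crawler -o items.json). Assuming the dmoz.org home page exposes those meta tags, items.json should contain a single record shaped like the line below (field shapes only, not actual scraped values; with the lxml-based parse, title and desc come back as lists):

[{"title": ["..."], "link": "http://www.dmoz.org/", "desc": ["..."]}]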

Answer 1 (score: 0):

dmoz.org has no links with "Items" in the href, so your rule never extracts any links; that is why your items.json file is empty.
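
A rule with no allow pattern would follow every on-site link instead of only hrefs containing "Items/". Below is a minimal sketch of that fix, built on the question's spider; the pattern choice and the meta-tag XPaths are assumptions for illustration, not part of this answer:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from tutorial.items import DmozItem

class StackCrawlerSpider(CrawlSpider):
    name = 'stack_crawler'
    allowed_domains = ['www.dmoz.org']  # bare domain, no scheme
    start_urls = ['http://www.dmoz.org/']

    rules = (
        # No allow pattern: the extractor follows every link it finds on the site
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        i = DmozItem()
        i['title'] = response.xpath('//meta[@property="og:title"]/@content').extract_first()
        i['link'] = response.url
        i['desc'] = response.xpath('//meta[@name="description"]/@content').extract_first()
        yield i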