Scrapy extracting text from a web page

Time: 2015-11-17 20:32:42

Tags: python xpath scrapy

I am trying to write a Scrapy spider that extracts the main block of text from a web page. I worked through the Scrapy tutorial successfully, but when I point it at my own site I cannot get it to work. I think the problem is probably with my XPaths, but I don't know enough to fix them.

Thanks.

# -*- coding: utf-8 -*-
import scrapy

from scrapy.spiders import Spider
from scrapy.selector import Selector
from Tutorial.items import DmozItem

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["http://www.inkstudents.co.uk/"]
    start_urls = [
        'http://www.inkstudents.co.uk/article/is-venice-sinking',
        'http://www.inkstudents.co.uk/article/what-do-romeo-and-juliet-and-the-eurozone-have-in-common'
    ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            #item['title'] = sel.xpath('//body/text()').extract()
            item['link'] = sel.xpath('//*[@id="mainCont"]/div[3]/div[2]').extract()
            #item['desc'] = sel.xpath('/html').extract()
            yield item

I also have an items.py file.
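For reference, the items.py that the spider imports from presumably follows the Scrapy tutorial and looks roughly like this (a sketch; the field names are assumed from the fields used in the spider above):

import scrapy

class DmozItem(scrapy.Item):
    # fields assumed from the tutorial and the spider code above
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()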

Here is the log:

2015-11-17 21:09:45 [scrapy] INFO: Scrapy 1.0.3 started (bot: Tutorial)
2015-11-17 21:09:45 [scrapy] INFO: Optional features available: ssl, http11
2015-11-17 21:09:45 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'Tutorial.spiders', 'FEED_FORMAT': 'json', 'SPIDER_MODULES': ['Tutorial.spiders'], 'FEED_URI': 'test4.json', 'BOT_NAME': 'Tutorial'}
2015-11-17 21:09:46 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2015-11-17 21:09:46 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-11-17 21:09:46 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-11-17 21:09:46 [scrapy] INFO: Enabled item pipelines: 
2015-11-17 21:09:46 [scrapy] INFO: Spider opened
2015-11-17 21:09:46 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-11-17 21:09:46 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-11-17 21:09:46 [scrapy] DEBUG: Crawled (200) <GET http://www.inkstudents.co.uk/article/what-do-romeo-and-juliet-and-the-eurozone-have-in-common> (referer: None)
2015-11-17 21:09:46 [scrapy] DEBUG: Crawled (200) <GET http://www.inkstudents.co.uk/article/is-venice-sinking> (referer: None)
2015-11-17 21:09:46 [scrapy] INFO: Closing spider (finished)
2015-11-17 21:09:47 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 527,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 12034,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2015, 11, 17, 21, 9, 46, 999995),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2015, 11, 17, 21, 9, 46, 430059)}
2015-11-17 21:09:47 [scrapy] INFO: Spider closed (finished)

1 Answer:

Answer 0 (score: 0)

I'm fairly sure the problem is with your XPath.

By interacting with the response, I was able to pull all of the content out of the articleContent on the site.

def parse(self, response):
    for sel in response.xpath('//div[@class="articleContent"]/p/text()'):
        item = DmozItem()
        item['link'] = sel.extract()
        yield item
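If it helps, you can sanity-check that XPath interactively in the Scrapy shell before running the spider (a rough sketch; what gets returned depends on the page markup):

scrapy shell "http://www.inkstudents.co.uk/article/is-venice-sinking"
>>> # should print a list of strings, one per paragraph text node in the article body
>>> response.xpath('//div[@class="articleContent"]/p/text()').extract()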

I don't know if this is exactly what you want, but it should give you a good starting point for your XPaths.

Hope this helps!

Note: I put the text into the DmozItem's link field because that is what you had in your example, but it isn't actually a link.
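If you want the field names to match their contents, one option (just a sketch, assuming you can edit items.py; the text field is hypothetical) is to add a dedicated field for the article body and keep link for the URL:

import scrapy

class DmozItem(scrapy.Item):
    link = scrapy.Field()
    text = scrapy.Field()  # hypothetical field for the article body

# in the spider
def parse(self, response):
    item = DmozItem()
    item['link'] = response.url
    # join every paragraph text node into a single block of text
    item['text'] = ' '.join(response.xpath('//div[@class="articleContent"]/p/text()').extract())
    yield item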