How to use scrapy

Asked: 2016-03-29 06:06:07

Tags: python html web-scraping scrapy

I want to use scrapy in Python to extract URLs from a particular website, which has the following HTML structure:

<div class="comic-table">
<div id="comic">
		<img src="http://demowebsite.com/uploads/image1" alt="" title="">
		<img src="http://demowebsite.com/uploads/image2" alt="" title="">
</div>
</div>

Here is the scrapy code I have written:

import scrapy
from scrapy.contrib.spiders import Rule, CrawlSpider
from scrapy.contrib.linkextractors import LinkExtractor
from Pencils.items import PencilsItem

class Spider(CrawlSpider):
    name = 'pencil'
    allowed_domains = ['demowebsite.com']
    start_urls = ['http://demowebsite.com']
    rules = [Rule(LinkExtractor(allow=['/uploads/.*']), 'parse_pencil')]

    def parse_pencil(self, response):
        image = PencilsItem()
        rel = response.xpath("WHAT_SHOULD_I_PUT_HERE").extract()
        image['image_urls'] = ['http:'+rel[0]]
        return image

What should I put in the response.xpath field?

P.S. I am a beginner at HTML and Python.

2 Answers:

Answer 0 (score: 2)

Try this:

    '//div[@id="comic"]/img'

//   =>  search the whole html page
@    =>  attribute 

That xpath finds all <div> tags that have an attribute named id equal to "comic" (there should only be one <div> tag with the attribute id="comic", because id values are supposed to be unique), and extracts the <img> tags inside it.
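
(Not part of the original answer, just a minimal sketch: if you want to test an xpath like this without running a full crawl, you can feed the question's HTML straight into Scrapy's Selector.)

# Minimal sketch, assuming only that Scrapy is installed: paste the
# question's HTML into a string and run the xpath against it directly.
from scrapy.selector import Selector

html = """
<div class="comic-table">
<div id="comic">
    <img src="http://demowebsite.com/uploads/image1" alt="" title="">
    <img src="http://demowebsite.com/uploads/image2" alt="" title="">
</div>
</div>
"""

sel = Selector(text=html)

# Each selector here is one <img> inside <div id="comic">; @src pulls the URL.
for img in sel.xpath('//div[@id="comic"]/img'):
    print(img.xpath('@src').extract()[0])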

With scrapy, you can do the following to get all the <img> tags:

import scrapy

class TestSpider(scrapy.Spider):
    name = "my_spider"

    start_urls = [
        "file:///Users/7stud/python_programs/scrapy_stuff/html_files/html.html"
    ]

    def parse(self, response):
        for selector in response.xpath('//div[@id="comic"]/img'):
            src = selector.xpath('@src').extract()
            print src[0]

--output:--
(scrapy_env)~/python_programs/scrapy_stuff$ scrapy crawl my_spider
2016-03-29 02:19:09 [scrapy] INFO: Scrapy 1.0.5 started (bot: scrapy_stuff)
2016-03-29 02:19:09 [scrapy] INFO: Optional features available: ssl, http11
2016-03-29 02:19:09 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scrapy_stuff.spiders', 'SPIDER_MODULES': ['scrapy_stuff.spiders'], 'BOT_NAME': 'scrapy_stuff'}
2016-03-29 02:19:09 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-03-29 02:19:09 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-03-29 02:19:09 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-03-29 02:19:09 [scrapy] INFO: Enabled item pipelines:
2016-03-29 02:19:09 [scrapy] INFO: Spider opened
2016-03-29 02:19:09 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-03-29 02:19:09 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-03-29 02:19:09 [scrapy] DEBUG: Crawled (200) <GET file:///Users/7stud/python_programs/scrapy_stuff/html_files/html.html> (referer: None)
http://demowebsite.com/uploads/image1
http://demowebsite.com/uploads/image2
2016-03-29 02:19:09 [scrapy] INFO: Closing spider (finished)
2016-03-29 02:19:09 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 263,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 243,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 3, 29, 8, 19, 9, 251971),
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 3, 29, 8, 19, 9, 139531)}
2016-03-29 02:19:09 [scrapy] INFO: Spider closed (finished)
(scrapy_env)~/python_programs/scrapy_stuff$

In fact, if all you want is the src attribute inside the <img> tags, you can get the src attribute directly with the following xpath:

    '//div[@id="comic"]/img/@src'

"P.S. I am a beginner at HTML and Python."

What about xml and xpath? The topic you really need to explore is xpath. However, as a beginner with html and xpath, I would suggest that you start out web scraping with BeautifulSoup.
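
As a rough illustration of that last suggestion (this is not from the original answer, and it assumes the requests and beautifulsoup4 packages are installed), the same extraction with BeautifulSoup would look something like this:

# Hedged sketch only: fetch the page and pull the same src attributes with
# BeautifulSoup instead of an xpath.
import requests
from bs4 import BeautifulSoup

response = requests.get("http://demowebsite.com")
soup = BeautifulSoup(response.text, "html.parser")

comic = soup.find("div", id="comic")                  # the <div id="comic"> block
srcs = [img["src"] for img in comic.find_all("img")]  # src of every <img> inside it
print(srcs)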

Answer 1 (score: 0)

To get all the links, you should use:

response.xpath("//div[@id='comic']/img/@src").extract()

Your code would look like:

import scrapy
from scrapy.contrib.spiders import Rule, CrawlSpider
from scrapy.contrib.linkextractors import LinkExtractor
from Pencils.items import PencilsItem

class Spider(CrawlSpider):
    name = 'pencil'
    allowed_domains = ['demowebsite.com']
    start_urls = ['http://demowebsite.com']
    rules = [Rule(LinkExtractor(allow=['/uploads/.*']), 'parse_pencil')]

    def parse_pencil(self, response):
        item = PencilsItem()
        item['image_urls'] = response.xpath("//div[@id='comic']/img/@src").extract()
        yield item
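
Note that neither the question nor this answer shows the PencilsItem definition; for the spider above to run, items.py needs an image_urls field, roughly like the assumed sketch below:

# Assumed items.py for the project (not shown in the original post); image_urls
# is the field the spiders above populate, and also what Scrapy's ImagesPipeline reads.
import scrapy

class PencilsItem(scrapy.Item):
    image_urls = scrapy.Field()
    images = scrapy.Field()  # filled in by ImagesPipeline, if that pipeline is enabled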

If the img src does not contain the domain, use this code:

from urlparse import urlparse
parsed_uri = urlparse(response.url)
domain = '{uri.scheme}://{uri.netloc}/'.format(uri=parsed_uri)
links = [domain+link for link in response.xpath("//div[@id='comic']/img/@src").extract()]
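
Alternatively (this is my own suggestion rather than part of the answer, and it relies on Response.urljoin, available since Scrapy 1.0), you can let the response resolve relative URLs for you:

# Sketch: urljoin resolves each relative src against response.url, so the
# scheme/netloc assembly above is not needed.
links = [response.urljoin(src)
         for src in response.xpath("//div[@id='comic']/img/@src").extract()]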