How do I find the word I need with XPath?

Time: 2014-12-19 09:24:17

Tags: xpath scrapy

I am using Scrapy to crawl a website, but I don't know how to parse it and find the word I want. The site's markup is shown below; I want to find "hello I'm here".

Here is my XPath expression:

//div[@class='sort_left']/p/strong/a/href/text()

The HTML snippet:

<div class="sort hottest_dishes1">
    <ul class="sort_title">
        <li class="current"><a href="/list_rest.php?a=75&s=1">按默认排序</a></li>
        <li class=""><a href="/list_rest.php?a=75&s=2">按人气排序</a></li>
    </ul>

    <ol class="sort_content">
        <li class="show">
            <div class="sort_yi">                              
                <div class="sort_left">
                    <p class="li_title">
                        <strong class="span_left ">
                            <a href="/rest/75/1879">hello I'm here<span class="restaurant_list_hot"></span></a>
                            <span> (川菜) </span>
                        </strong>
                        <span class="span_d_right3" title="馋嘴牛蛙特价只要9.9元,每单限点1份">馋嘴牛蛙特价9块9</span>
                    </p>
                    <p class="consume">
                        <strong>人均消费:</strong>
                        <b><span>¥70</span>元</b>
                        <a href="http://www.dianping.com/shop/2271520" target="_blank">看网友点评</a>
                    </p>
                    <p class="sign">
                        <strong>招牌菜:</strong>
                        <span>水煮鲶鱼 馋嘴牛蛙 酸梅汤 钵钵鸡 香辣土豆丝 毛血旺 香口猪手 ……</span>
                    </p> 
                </div>
                <div class="sort_right">
                    <a href="/rest/75/1879">看菜谱</a>
                </div>
                <div class="sort_all"  >
                    <strong>送达时间:</strong><span>60分钟</span>                                    
                </div>
            </div>

Using response.css in the shell gives the right result, but in the Scrapy spider it returns nothing. Did I write the code wrong? Here is my code:

def parse_torrent(self, response):
    torrent = TorrentItem()
    torrent['url'] = response.url
    torrent['name'] = response.xpath("//div[@class='sort_left']/p/strong/a[1]").extract()[1]
    torrent['description'] = response.xpath("//div[@id='list_content']/div/div/ol/li/div/div/p/strong[1]/following-sibling::span[1]").extract()
    torrent['size'] = response.xpath("//div[@id='list_content']/div/div/ol/li/div/div/p/span[1]").extract()
    return torrent


3 Answers:

Answer 0 (score: 0)

I can't see a <div> in your HTML excerpt with an attribute whose value is 'list_content', so the [@id='list_content'] predicate filters everything out, no matter what the rest of your XPath expression is. The result of evaluating the expression is an empty sequence.

After the question was edited:

There is no <href> element in the HTML, so the .../a/href sub-expression selects nothing.
href is an attribute of <a>; use .../a/@href to address the content of the href attribute.

But if you still want to find the 'hello I'm here' text, then you need to access the content of the <a> element instead; use .../a/text().
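
To illustrate the difference, here is a minimal sketch using Scrapy's standalone Selector class; the html string is an abbreviated, assumed stand-in for the page:

from scrapy.selector import Selector

# Abbreviated stand-in for the markup shown in the question.
html = """
<div class="sort_left">
  <p class="li_title">
    <strong class="span_left">
      <a href="/rest/75/1879">hello I'm here<span></span></a>
    </strong>
  </p>
</div>
"""

sel = Selector(text=html)

# .../a/@href selects the attribute value...
print(sel.xpath("//div[@class='sort_left']/p/strong/a/@href").extract())
# -> [u'/rest/75/1879']

# ...while .../a/text() selects the element's text nodes.
print(sel.xpath("//div[@class='sort_left']/p/strong/a/text()").extract())
# -> [u"hello I'm here"]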

Answer 1 (score: 0)

Here is an example of what you need to do:

def parse_torrent(self, response):
    print response.xpath('//div[@class="sort_left"]/p/strong/a/text()').extract()[0]

Output:

2014-12-19 10:58:28+0100 [scrapy] INFO: Scrapy 0.24.4 started (bot: skema_crawler)
2014-12-19 10:58:28+0100 [scrapy] INFO: Optional features available: ssl, http11
2014-12-19 10:58:28+0100 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'skema_crawler.spiders', 'SPIDER_MODULES': ['skema_crawler.spiders'], 'BOT_NAME': 'skema_crawler'}
2014-12-19 10:58:28+0100 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-12-19 10:58:29+0100 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-12-19 10:58:29+0100 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-12-19 10:58:29+0100 [scrapy] INFO: Enabled item pipelines:
2014-12-19 10:58:29+0100 [linkedin] INFO: Spider opened
2014-12-19 10:58:29+0100 [linkedin] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-12-19 10:58:29+0100 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2014-12-19 10:58:29+0100 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2014-12-19 10:58:29+0100 [linkedin] DEBUG: Crawled (200) <GET file:///C:/1.html> (referer: None)
hello I'm here
2014-12-19 10:58:29+0100 [linkedin] INFO: Closing spider (finished)
2014-12-19 10:58:29+0100 [linkedin] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 232,
         'downloader/request_count': 1,
         'downloader/request_method_count/GET': 1,
         'downloader/response_bytes': 1599,
         'downloader/response_count': 1,
         'downloader/response_status_count/200': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2014, 12, 19, 9, 58, 29, 241000),
         'log_count/DEBUG': 3,
         'log_count/INFO': 7,
         'response_received_count': 1,
         'scheduler/dequeued': 1,
         'scheduler/dequeued/memory': 1,
         'scheduler/enqueued': 1,
         'scheduler/enqueued/memory': 1,
         'start_time': datetime.datetime(2014, 12, 19, 9, 58, 29, 213000)}
2014-12-19 10:58:29+0100 [linkedin] INFO: Spider closed (finished)

You can see that hello I'm here has appeared.

You are referring to

response.xpath("//div[@class='sort_left']/p/strong/a[1]").extract()[1]

You need to add text() to your XPath, and since there is a span inside the a, you need to take element [0], not [1]. So you need to change it to

response.xpath("//div[@class='sort_left']/p/strong/a/text()").extract()[0]

Answer 2 (score: 0)

Personally, I find CSS selectors much easier than XPath for locating content. For the response object you get when crawling the document in question, why not try response.css('p[class="li_title"] a::text')[0].extract()?

(I tested it, and it works in the scrapy shell. Output: u"hello I'm here")
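
For a quick check outside a full crawl, the same selection can be reproduced with Scrapy's standalone Selector; the html string below is an abbreviated, assumed stand-in for the page:

from scrapy.selector import Selector

# Abbreviated stand-in for the markup in the question.
html = ('<p class="li_title"><strong>'
        '<a href="/rest/75/1879">hello I\'m here<span></span></a>'
        '</strong></p>')

# ::text selects the text nodes of the matched <a>; the first one
# is the string before the nested <span>.
print(Selector(text=html).css('p[class="li_title"] a::text')[0].extract())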