Limit web-scraping extraction to once per XPath item; too many copies are returned

Time: 2015-02-10 12:56:03

Tags: python xpath web-crawler scrapy

I am using the following Scrapy-based scraping script to extract certain elements from this page. However, it returns the same information over and over again, which bloats the post-processing I have to do. Is there a good way to limit these extractions to once per XPath item?

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
#from hz_sample.items import HzSampleItem

class DmozSpider(BaseSpider):
    name = "hzIII"
    allowed_domains = ["tool.httpcn.com"]
    start_urls = ["http://tool.httpcn.com/Html/Zi/28/PWMETBAZTBTBBDTB.shtml"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select("//p")

        for titles in titles:
            tester = titles.xpath('//*[@id="div_a1"]/div[3][1]').extract()
            #jester = titles.xpath('//*[@id="div_a1"]/div[2]').extract()
            print tester

This is what my output currently looks like (that is a link to a Dropbox file).

The output should look like this:

[u'<div class="content16">\r\n<span class="zi18b">\u25ce \u57fa\u672c\u89e3\u91ca</span><br>\r\n\u6bd6 <br>b\xec <br>\u8c28\u614e\uff1a\u60e9\u524d\u6bd6\u540e\uff08\u63a5\u53d7\u8fc7\u53bb\u5931\u8d25\u7684\u6559\u8bad\uff0c\u4ee5\u540e\u5c0f\u5fc3\u4e0d\u91cd\u72af\uff09\u3002 <br>\u64cd\u52b3\uff1a\u201c\u65e0\u6bd6\u4e8e\u6064\u201d\u3002 <br>\u53e4\u540c\u201c\u6ccc\u201d\uff0c\u6cc9\u6c34\u5192\u51fa\u6d41\u6dcc\u7684\u6837\u5b50\u3002 <br> <br>\u7b14\u753b\u6570\uff1a9\uff1b <br>\u90e8\u9996\uff1a\u6bd4\uff1b <br>\u7b14\u987a\u7f16\u53f7\uff1a153545434 <br><br><br>\r\n</div>'] [u'<div class="text16"><span class="zi18b">\u25ce \u5b57\u5f62\u7ed3\u6784</span><br>[ <span class="b">\u9996\u5c3e\u5206\u89e3\u67e5\u5b57</span> ]\uff1a\u6bd4\u5fc5(bibi)\n\u3000[ <span class="b">\u6c49\u5b57\u90e8\u4ef6\u6784\u9020</span> ]\uff1a\u6bd4\u5fc5\n<br>[ <span class="b">\u7b14\u987a\u7f16\u53f7</span> ]\uff1a153545434<br>\n[ <span class="b">\u7b14\u987a\u8bfb\u5199</span> ]\uff1a\u6a2a\u6298\u6487\u6298\u637a\u6298\u637a\u6487\u637a<br>\n<br><hr class="hr"></div>']

But the current arrangement returns the desired output too many times, as below:

[u'<div class="content16">\r\n<span class="zi18b">\u25ce \u57fa\u672c\u89e3\u91ca</span><br>\r\n\u6bd6 <br>b\xec <br>\u8c28\u614e\uff1a\u60e9\u524d\u6bd6\u540e\uff08\u63a5\u53d7\u8fc7\u53bb\u5931\u8d25\u7684\u6559\u8bad\uff0c\u4ee5\u540e\u5c0f\u5fc3\u4e0d\u91cd\u72af\uff09\u3002 <br>\u64cd\u52b3\uff1a\u201c\u65e0\u6bd6\u4e8e\u6064\u201d\u3002 <br>\u53e4\u540c\u201c\u6ccc\u201d\uff0c\u6cc9\u6c34\u5192\u51fa\u6d41\u6dcc\u7684\u6837\u5b50\u3002 <br> <br>\u7b14\u753b\u6570\uff1a9\uff1b <br>\u90e8\u9996\uff1a\u6bd4\uff1b <br>\u7b14\u987a\u7f16\u53f7\uff1a153545434 <br><br><br>\r\n</div>'] [u'<div class="text16"><span class="zi18b">\u25ce \u5b57\u5f62\u7ed3\u6784</span><br>[ <span class="b">\u9996\u5c3e\u5206\u89e3\u67e5\u5b57</span> ]\uff1a\u6bd4\u5fc5(bibi)\n\u3000[ <span class="b">\u6c49\u5b57\u90e8\u4ef6\u6784\u9020</span> ]\uff1a\u6bd4\u5fc5\n<br>[ <span class="b">\u7b14\u987a\u7f16\u53f7</span> ]\uff1a153545434<br>\n[ <span class="b">\u7b14\u987a\u8bfb\u5199</span> ]\uff1a\u6a2a\u6298\u6487\u6298\u637a\u6298\u637a\u6487\u637a<br>\n<br><hr class="hr"></div>']
[u'<div class="content16">\r\n<span class="zi18b">\u25ce \u57fa\u672c\u89e3\u91ca</span><br>\r\n\u6bd6 <br>b\xec <br>\u8c28\u614e\uff1a\u60e9\u524d\u6bd6\u540e\uff08\u63a5\u53d7\u8fc7\u53bb\u5931\u8d25\u7684\u6559\u8bad\uff0c\u4ee5\u540e\u5c0f\u5fc3\u4e0d\u91cd\u72af\uff09\u3002 <br>\u64cd\u52b3\uff1a\u201c\u65e0\u6bd6\u4e8e\u6064\u201d\u3002 <br>\u53e4\u540c\u201c\u6ccc\u201d\uff0c\u6cc9\u6c34\u5192\u51fa\u6d41\u6dcc\u7684\u6837\u5b50\u3002 <br> <br>\u7b14\u753b\u6570\uff1a9\uff1b <br>\u90e8\u9996\uff1a\u6bd4\uff1b <br>\u7b14\u987a\u7f16\u53f7\uff1a153545434 <br><br><br>\r\n</div>'] [u'<div class="text16"><span class="zi18b">\u25ce \u5b57\u5f62\u7ed3\u6784</span><br>[ <span class="b">\u9996\u5c3e\u5206\u89e3\u67e5\u5b57</span> ]\uff1a\u6bd4\u5fc5(bibi)\n\u3000[ <span class="b">\u6c49\u5b57\u90e8\u4ef6\u6784\u9020</span> ]\uff1a\u6bd4\u5fc5\n<br>[ <span class="b">\u7b14\u987a\u7f16\u53f7</span> ]\uff1a153545434<br>\n[ <span class="b">\u7b14\u987a\u8bfb\u5199</span> ]\uff1a\u6a2a\u6298\u6487\u6298\u637a\u6298\u637a\u6487\u637a<br>\n<br><hr class="hr"></div>']
[u'<div class="content16">\r\n<span class="zi18b">\u25ce \u57fa\u672c\u89e3\u91ca</span><br>\r\n\u6bd6 <br>b\xec <br>\u8c28\u614e\uff1a\u60e9\u524d\u6bd6\u540e\uff08\u63a5\u53d7\u8fc7\u53bb\u5931\u8d25\u7684\u6559\u8bad\uff0c\u4ee5\u540e\u5c0f\u5fc3\u4e0d\u91cd\u72af\uff09\u3002 <br>\u64cd\u52b3\uff1a\u201c\u65e0\u6bd6\u4e8e\u6064\u201d\u3002 <br>\u53e4\u540c\u201c\u6ccc\u201d\uff0c\u6cc9\u6c34\u5192\u51fa\u6d41\u6dcc\u7684\u6837\u5b50\u3002 <br> <br>\u7b14\u753b\u6570\uff1a9\uff1b <br>\u90e8\u9996\uff1a\u6bd4\uff1b <br>\u7b14\u987a\u7f16\u53f7\uff1a153545434 <br><br><br>\r\n</div>'] [u'<div class="text16"><span class="zi18b">\u25ce \u5b57\u5f62\u7ed3\u6784</span><br>[ <span class="b">\u9996\u5c3e\u5206\u89e3\u67e5\u5b57</span> ]\uff1a\u6bd4\u5fc5(bibi)\n\u3000[ <span class="b">\u6c49\u5b57\u90e8\u4ef6\u6784\u9020</span> ]\uff1a\u6bd4\u5fc5\n<br>[ <span class="b">\u7b14\u987a\u7f16\u53f7</span> ]\uff1a153545434<br>\n[ <span class="b">\u7b14\u987a\u8bfb\u5199</span> ]\uff1a\u6a2a\u6298\u6487\u6298\u637a\u6298\u637a\u6487\u637a<br>\n<br><hr class="hr"></div>']
[u'<div class="content16">\r\n<span class="zi18b">\u25ce \u57fa\u672c\u89e3\u91ca</span><br>\r\n\u6bd6 <br>b\xec <br>\u8c28\u614e\uff1a\u60e9\u524d\u6bd6\u540e\uff08\u63a5\u53d7\u8fc7\u53bb\u5931\u8d25\u7684\u6559\u8bad\uff0c\u4ee5\u540e\u5c0f\u5fc3\u4e0d\u91cd\u72af\uff09\u3002 <br>\u64cd\u52b3\uff1a\u201c\u65e0\u6bd6\u4e8e\u6064\u201d\u3002 <br>\u53e4\u540c\u201c\u6ccc\u201d\uff0c\u6cc9\u6c34\u5192\u51fa\u6d41\u6dcc\u7684\u6837\u5b50\u3002 <br> <br>\u7b14\u753b\u6570\uff1a9\uff1b <br>\u90e8\u9996\uff1a\u6bd4\uff1b <br>\u7b14\u987a\u7f16\u53f7\uff1a153545434 <br><br><br>\r\n</div>'] [u'<div class="text16"><span class="zi18b">\u25ce \u5b57\u5f62\u7ed3\u6784</span><br>[ <span class="b">\u9996\u5c3e\u5206\u89e3\u67e5\u5b57</span> ]\uff1a\u6bd4\u5fc5(bibi)\n\u3000[ <span class="b">\u6c49\u5b57\u90e8\u4ef6\u6784\u9020</span> ]\uff1a\u6bd4\u5fc5\n<br>[ <span class="b">\u7b14\u987a\u7f16\u53f7</span> ]\uff1a153545434<br>\n[ <span class="b">\u7b14\u987a\u8bfb\u5199</span> ]\uff1a\u6a2a\u6298\u6487\u6298\u637a\u6298\u637a\u6487\u637a<br>\n<br><hr class="hr"></div>']
[u'<div class="content16">\r\n<span class="zi18b">\u25ce \u57fa\u672c\u89e3\u91ca</span><br>\r\n\u6bd6 <br>b\xec <br>\u8c28\u614e\uff1a\u60e9\u524d\u6bd6\u540e\uff08\u63a5\u53d7\u8fc7\u53bb\u5931\u8d25\u7684\u6559\u8bad\uff0c\u4ee5\u540e\u5c0f\u5fc3\u4e0d\u91cd\u72af\uff09\u3002 <br>\u64cd\u52b3\uff1a\u201c\u65e0\u6bd6\u4e8e\u6064\u201d\u3002 <br>\u53e4\u540c\u201c\u6ccc\u201d\uff0c\u6cc9\u6c34\u5192\u51fa\u6d41\u6dcc\u7684\u6837\u5b50\u3002 <br> <br>\u7b14\u753b\u6570\uff1a9\uff1b <br>\u90e8\u9996\uff1a\u6bd4\uff1b <br>\u7b14\u987a\u7f16\u53f7\uff1a153545434 <br><br><br>\r\n</div>'] [u'<div class="text16"><span class="zi18b">\u25ce \u5b57\u5f62\u7ed3\u6784</span><br>[ <span class="b">\u9996\u5c3e\u5206\u89e3\u67e5\u5b57</span> ]\uff1a\u6bd4\u5fc5(bibi)\n\u3000[ <span class="b">\u6c49\u5b57\u90e8\u4ef6\u6784\u9020</span> ]\uff1a\u6bd4\u5fc5\n<br>[ <span class="b">\u7b14\u987a\u7f16\u53f7</span> ]\uff1a153545434<br>\n[ <span class="b">\u7b14\u987a\u8bfb\u5199</span> ]\uff1a\u6a2a\u6298\u6487\u6298\u637a\u6298\u637a\u6487\u637a<br>\n<br><hr class="hr"></div>']

2 answers:

Answer 0 (score: 2):

I think what you want is

 tester = titles.xpath('(//*[@id="div_a1"]/div[3])[1]').extract()

if by "limiting the extractions" you mean that you only want to retrieve the first node of the result set. But rather than doing that, it might help to find an XPath expression that returns only one result in the first place, instead of always selecting the first of several.
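To see why the parentheses matter, here is a minimal, self-contained sketch using lxml (illustrative markup only, not the page from the question):

from lxml import html

doc = html.fromstring("""
<body>
  <ul><li>a</li><li>b</li></ul>
  <ul><li>c</li><li>d</li></ul>
</body>
""")

# //ul/li[1] applies [1] once per context node, so it returns the
# first <li> of EVERY <ul>: ['a', 'c']
print [e.text for e in doc.xpath('//ul/li[1]')]

# (//ul/li)[1] applies [1] to the merged result set, so at most one
# node comes back: ['a']
print [e.text for e in doc.xpath('(//ul/li)[1]')]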


Alternatively, there is a way to solve this on the Python side. I'm not too familiar with Python, but it looks to me like tester is an array-like structure, so it should be possible to output only the first item, something like

print tester[0]
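Note that if the XPath matches nothing, tester is an empty list and tester[0] raises an IndexError, so a small guard is safer:

if tester:
    print tester[0]  # first match only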

EDIT: Again, I'm not familiar with Python, but if the XPath expression is applied inside a for loop, it's no surprise that the output is redundant, is it? You are selecting all p elements and then looping over every one of them, so //*[@id="div_a1"]/div[2] gets extracted multiple times. Something like this should extract each block only once:

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    root = hxs.select("/")

    retester = root.xpath('//*[@id="div_a1"]/div[2]').extract()
    tester = root.xpath('//*[@id="div_a1"]/div[3]').extract()
    print tester, retester

Maybe you don't even have to select something first and can apply the XPath expressions directly to hxs.
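For context, the underlying reason the loop printed duplicates is that an XPath expression starting with // always searches from the document root, no matter which node it is called on; prefixing it with a dot makes it relative to the current node. A sketch of the difference inside the original loop:

for title in titles:
    # absolute path: searches the whole document on every iteration,
    # so every pass prints the very same nodes
    print title.xpath('//*[@id="div_a1"]/div[3]').extract()

    # relative path: scoped to the current <p> node (usually empty
    # here, since div_a1 is not inside a <p>)
    print title.xpath('.//*[@id="div_a1"]/div[3]').extract()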

Answer 1 (score: 0):

A very simple solution is to correct your parse function. The outer loop is not needed, because there is only one div_a1 element in the HTML.

from scrapy.spider import BaseSpider

class Spider(BaseSpider):
    name = "hzIII"
    allowed_domains = ["tool.httpcn.com"]
    start_urls = ["http://tool.httpcn.com/Html/Zi/28/PWMETBAZTBTBBDTB.shtml"]

    def parse(self, response):
        print response.xpath('//*[@id="div_a1"]/div[2]').extract()
        print response.xpath('//*[@id="div_a1"]/div[3]').extract()
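If the values are destined for post-processing, it may also be cleaner to yield them as an item instead of printing them. A sketch with hypothetical field names (the commented-out HzSampleItem import in the question suggests an items module already exists):

from scrapy.item import Item, Field

class HanziItem(Item):
    explanation = Field()  # hypothetical name for the div[2] block
    structure = Field()    # hypothetical name for the div[3] block

# then, inside the spider:
def parse(self, response):
    item = HanziItem()
    item['explanation'] = response.xpath('//*[@id="div_a1"]/div[2]').extract()
    item['structure'] = response.xpath('//*[@id="div_a1"]/div[3]').extract()
    yield item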

Note: regarding the posted code, there is a serious mistake in the loop. for titles in titles loops over all the elements while shadowing the very list it iterates over; you probably meant for title in titles. In any case, since there is only one element with this id, you don't need the loop at all.