In Scrapy, my XPath selection of HTML returns many unnecessary "" values. Why?

Asked: 2016-09-07 01:30:15

Tags: python xpath scrapy

I have run into a problem while parsing http://so.gushiwen.org/view_20788.aspx

[Screenshot of the browser inspector omitted]

This is what I want:

"detail_text": ["
    寥落古行宫,宫花寂寞红。白头宫女在,闲坐说玄宗。 
"],

But this is what I got:

"detail_text": ["
    ", "
    ", "
    ", "
    ", "
    寥落古行宫,宫花寂寞红。", "白头宫女在,闲坐说玄宗。 
"],

Here is my code:

# spider
import scrapy
from scrapy.selector import Selector

from tangshi3.items import Tangshi3Item  # module path assumed; adjust to your project layout


class Tangshi3Spide(scrapy.Spider):
    name = "tangshi3"
    allowed_domains = ["gushiwen.org"]
    start_urls = [
        "http://so.gushiwen.org/view_20788.aspx"
    ]
    def __init__(self, *args, **kwargs):
        # Call the base initializer so Scrapy's spider setup still runs
        super(Tangshi3Spide, self).__init__(*args, **kwargs)
        self.items = []

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//div[@class="main3"]/div[@class="shileft"]')
        domain = 'http://so.gushiwen.org'
        for site in sites:
            item = Tangshi3Item()
            item['detail_title'] = site.xpath('div[@class="son1"]/h1/text()').extract()
            item['detail_dynasty'] = site.xpath(
                u'div[@class="son2"]/p/span[contains(text(),"朝代:")]/parent::p/text()').extract()
            item['detail_translate_note_url'] = site.xpath('div[@id="fanyiShort676"]/p/a/u/parent::a/@href').extract()
            item['detail_appreciation_url'] = site.xpath('div[@id="shangxiShort787"]/p/a/u/parent::a/@href').extract()
            item['detail_background_url'] = site.xpath('div[@id="shangxiShort24492"]/p/a/u/parent::a/@href').extract()
            # the problematic line
            item['detail_text'] = site.xpath('div[@class="son2"]/text()').extract()
            self.items.append(item)
        return self.items



# pipeline
import codecs
import json


class Tangshi3Pipeline(object):
    def __init__(self):
        self.file = codecs.open('tangshi_detail.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        line = json.dumps(dict(item))
        self.file.write(line.decode("unicode_escape"))
        return item
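A side note unrelated to the XPath question: the `json.dumps(...)` plus `decode("unicode_escape")` round-trip in the pipeline is a Python 2 workaround for getting readable Chinese text into the file. In Python 3, `json.dumps` with `ensure_ascii=False` writes the characters directly. A minimal sketch of that alternative:

```python
import json

# Sample item dict with Chinese text, modeled after the spider's output.
item = {"detail_text": ["寥落古行宫,宫花寂寞红。白头宫女在,闲坐说玄宗。"]}

# ensure_ascii=False keeps non-ASCII characters as-is instead of
# escaping them to \uXXXX sequences, so no decode step is needed.
line = json.dumps(item, ensure_ascii=False)
print("寥落" in line)  # True
```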

How can I get the correct text?

1 Answer:

Answer 0 (score: 4):

You can add the predicate [normalize-space()] to avoid picking up empty text nodes, i.e. nodes that contain only whitespace:

item['detail_text'] = site.xpath('div[@class="son2"]/text()[normalize-space()]').extract()
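The effect of the predicate can be demonstrated outside Scrapy with lxml (the library that Scrapy's selectors build on). The HTML fragment below is a simplified stand-in for the real page markup, not the actual source of the page:

```python
from lxml import html

# Simplified fragment: whitespace-only text nodes sit between the child
# elements, and the poem text is the only non-empty text node.
fragment = """
<div class="son2">
    <p>朝代:唐代</p>
    <p>作者:元稹</p>
    寥落古行宫,宫花寂寞红。白头宫女在,闲坐说玄宗。
</div>
"""

doc = html.fromstring(fragment)

# Without the predicate: whitespace-only text nodes are included.
raw = doc.xpath('//div[@class="son2"]/text()')

# With [normalize-space()]: whitespace-only text nodes are filtered out.
clean = doc.xpath('//div[@class="son2"]/text()[normalize-space()]')

print(len(raw) > len(clean))  # True: the predicate dropped empty nodes
print(clean[-1].strip())      # the poem text only
```

normalize-space() trims leading/trailing whitespace and collapses internal runs; used as a bare predicate it is truthy only for nodes that still contain text after that normalization, which is exactly the filter wanted here.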