Scrapy following-sibling of strong: incorrect results

Asked: 2013-10-09 04:04:17

Tags: scrapy

I am using Scrapy to extract parts of an address and need help with the syntax. Here is the markup (apologies if it is pasted badly; I'm not sure how to paste it into a question correctly):

<div class="result">
<h3>
<a href="/provider/service/xxxxx/">service name</a>
</h3>
<p>
"blah blah"
</p>
<strong>Physical Address</strong>
    "123 address street, someplace,  somewhere"
<br/>
<strong>Postcode</strong>
    "xxx"
<br/>
<strong>District/town</strong>
    "someplace"
<br/>
<strong>Region</strong>
    "someplace bigger"
<br/>
<strong>Phone</strong>
    "xx xxx xxxx"
<br/><strong>Fax Number</strong>
    "xx xxx xxxx"
<br/>
<!--strong>Email</strong-->
    <a href="#" onclick="window.location=('mail'+'to:'+'xxxxx'+''+'@'+'xxxx.xx.xx'+''); return false;">
"xxxxx"
<strong></strong>
"xxxxx.xx.xx"
</a>
<a rel="nofollow" class="printlist-add" href="/provider/print-list/add/xxxx/">Add to print list</a>        
</div>
<hr/>

Here is my spider:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from test.items import TestItem

class NewSpider(BaseSpider):
    name = "my_spider"

    download_delay = 2

    allowed_domains = ["website.com"]
    start_urls = [
        "http://website.com/site1",
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//*[@id="search-results"]/div')
        items = []
        for site in sites:
            item = TestItem()
            item['practice'] = site.select('h3/a/text()').extract()
            item['url'] = site.select('h3/a/@href').extract()
            item['address1'] = site.select('strong[text() = "Physical Address"]/following-sibling::text()[1]')
            items.append(item)
        return items

The line item['address1'] = site.select('strong[text()="Physical Address"]/following-sibling::text()[1]') returns a selector list rather than a string: [<HtmlXPathSelector xpath='strong[text()="Physical Address"]/following-sibling::text()[1]' data=u'\n\t\t\t 123 address street, someplace, some'>]. The last few characters appear to be clipped.

When I add .extract(), the value shows up in cmd as [u'\n\t\t\t 123 address street, someplace, somewhere'], but it does not appear in the output table.
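(For what it's worth, the trailing characters are not actually lost: the selector's repr simply truncates its data preview. Once .extract() is called the string is complete and only needs stripping. A minimal sketch, assuming the extracted list looks like the output above:)

```python
# The repr of a selector truncates its "data=" preview; the string
# returned by .extract() is complete and only needs whitespace stripped.
raw = [u'\n\t\t\t 123 address street, someplace, somewhere']  # .extract() result
address = raw[0].strip() if raw else u''
```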

I looked for a solution and tried .select('text()').extract(), but that was not right either.

Any help would, as always, be greatly appreciated.

PS. Advice on how to paste page source into this forum would also be appreciated. Thanks.

2 answers:

Answer 0 (score: 1):

Using your example URL, I would suggest something like the following, selecting the divs with the "result" class:

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    results = hxs.select('id("search-results")/div[@class="result"]')
    items = []
    for result in results:
        item = TestItem()
        item['practice'] = result.select('h3/a/text()').extract()[0]
        item['url'] = result.select('h3/a/@href').extract()[0]
        item['address1'] = map(
            unicode.strip,
            result.select('strong[text() = "Physical Address"]/following-sibling::text()[1]').extract()
        )[0]
        items.append(item)
    return items
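One caveat with the [0] indexing above: if the XPath matches nothing, extract() returns an empty list and [0] raises IndexError. A small helper (the name extract_first here is just illustrative) sketches a safer first-match extraction:

```python
def extract_first(extracted, default=u''):
    # Strip each extracted string and return the first non-empty result,
    # falling back to a default when the XPath matched nothing.
    stripped = [s.strip() for s in extracted]
    return stripped[0] if stripped else default
```

It would be used as item['address1'] = extract_first(result.select(...).extract()). Later Scrapy versions ship a built-in equivalent on selectors, but on the 0.x API shown here a helper like this avoids crashing on pages where a field is missing.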

Answer 1 (score: 1):

def caiqinghua_array_string_strip(array_string):
    if array_string == []:
        return ''
    else:
        # print 'item::: ', array_string[0].strip()
        string = array_string[0].replace('\r\n', '')
        return string.strip()

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    sites = hxs.select('//*[@id="search-results"]/div')
    items = []
    for site in sites:
        item = TestItem()
        item['practice'] = site.select('h3/a/text()').extract()
        item['url'] = site.select('h3/a/@href').extract()
        address = site.select('strong[text() = "Physical Address"]/following-sibling::text()[1]').extract()
        item['address1'] = caiqinghua_array_string_strip(address)
        items.append(item)
    return items
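The helper's contract can be exercised on its own; here is a self-contained sketch with the same behavior (restated so the example runs without the rest of the spider):

```python
def strip_first(array_string):
    # Same contract as the answer's helper: an empty extract() list
    # yields '', otherwise the first extracted string is cleaned up.
    if not array_string:
        return ''
    return array_string[0].replace('\r\n', '').strip()

print(strip_first([u'\n\t\t\t 123 address street, someplace, somewhere']))
# -> 123 address street, someplace, somewhere
print(repr(strip_first([])))
# -> ''
```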

Hope it helps you. By the way, I would suggest changing items = [] to items_list = [] or something else, since items is a keyword in Scrapy and may conflict in the future.