How do I use a for loop in Scrapy?

Date: 2013-09-03 18:08:30

Tags: python-2.7 for-loop web-scraping scrapy

I'm working on a project with Scrapy in which I extract information from XML documents.

This is the XML structure I want to loop over with a for loop:

<relatedPersonsList>
    <relatedPersonInfo>...</relatedPersonInfo>
    <relatedPersonInfo>
        <relatedPersonName>
            <firstName>Mark</firstName>
            <middleName>E.</middleName>
            <lastName>Lucas</lastName>
        </relatedPersonName>
        <relatedPersonAddress>
            <street1>1 IMATION WAY</street1>
            <city>OAKDALE</city>
            <stateOrCountry>MN</stateOrCountry>
            <stateOrCountryDescription>MINNESOTA</stateOrCountryDescription>
            <zipCode>55128</zipCode>
        </relatedPersonAddress>
        <relatedPersonRelationshipList>
            <relationship>Executive Officer</relationship>
            <relationship>Director</relationship>
        </relatedPersonRelationshipList>
        <relationshipClarification/>
    </relatedPersonInfo>
    <relatedPersonInfo>...</relatedPersonInfo>
    <relatedPersonInfo>...</relatedPersonInfo>
    <relatedPersonInfo>...</relatedPersonInfo>
    <relatedPersonInfo>...</relatedPersonInfo>
    <relatedPersonInfo>...</relatedPersonInfo>
</relatedPersonsList>
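For context, repeated sibling elements like these are exactly what a for loop over node selections visits, one node per iteration. A minimal standalone sketch with Python's standard-library ElementTree (the sample XML trimmed to the name fields above, with a hypothetical second person added to show the iteration):

```python
import xml.etree.ElementTree as ET

XML = """<relatedPersonsList>
    <relatedPersonInfo>
        <relatedPersonName>
            <firstName>Mark</firstName>
            <middleName>E.</middleName>
            <lastName>Lucas</lastName>
        </relatedPersonName>
    </relatedPersonInfo>
    <relatedPersonInfo>
        <relatedPersonName>
            <firstName>Jane</firstName>
            <lastName>Doe</lastName>
        </relatedPersonName>
    </relatedPersonInfo>
</relatedPersonsList>"""

root = ET.fromstring(XML)

people = []
# findall returns every matching child, so the loop visits each person in turn
for person in root.findall('relatedPersonInfo'):
    people.append({
        'firstName': person.findtext('relatedPersonName/firstName'),
        # findtext's default kicks in when the optional element is absent
        'middleName': person.findtext('relatedPersonName/middleName', default='NA'),
    })

print(people)
```

The key point is that a fresh dict is built inside the loop body, so each person ends up in its own record.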

As you can see, a <relatedPersonsList> can contain several <relatedPersonInfo> elements, but when I try to write a for loop I still only get one person's information.

Here is my actual code:

    for person in xxs.select('./relatedPersonsList/relatedPersonInfo'):
        item = Myform()  # even if I get rid of this, I get the same result

        item["firstName"] = person.select('./relatedPersonName/firstName/text()').extract()[0]
        item["middleName"] = person.select('./relatedPersonName/middleName/text()')
        if item["middleName"]:
            item["middleName"] = item["middleName"].extract()[0]
        else:
            item["middleName"] = "NA"

And here is the code I use in my spider:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.selector import XmlXPathSelector

from scrapy.http import Request
import urlparse
from formds.items import SecformD

class SecDform(CrawlSpider):
    name = "DFORM"

    allowed_domain = ["http://www..gov"]
    start_urls = [
        ""
    ]

    rules = (

        Rule(
            SgmlLinkExtractor(restrict_xpaths=["/html/body/div/table/tr/td[3]/a[2]"]),
            callback='parse_formd',
            #follow= True no need of follow thing
        ),
        Rule(
            SgmlLinkExtractor(restrict_xpaths=('/html/body/div/center[1]/a[contains(., "[NEXT]")]')),
            follow=True
        ),
    )

    def parse_formd(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//*[@id="formDiv"]/div/table/tr[3]/td[3]/a/@href').extract()
        for site in sites:
            yield Request(url=urlparse.urljoin(response.url, site), callback=self.parse_xml_document)

    def parse_xml_document(self, response):
        xxs = XmlXPathSelector(response)
        item = SecformD()
        item["stateOrCountryDescription"] = xxs.select('./primaryIssuer/issuerAddress/stateOrCountryDescription/text()').extract()[0]
        item["zipCode"] = xxs.select('./primaryIssuer/issuerAddress/zipCode/text()').extract()[0]
        item["issuerPhoneNumber"] = xxs.select('./primaryIssuer/issuerPhoneNumber/text()').extract()[0]
        for person in xxs.select('./relatedPersonsList//relatedPersonInfo'):
            #item = SecDform()

            item["firstName"] = person.select('./relatedPersonName/firstName/text()').extract()[0]
            item["middleName"] = person.select('./relatedPersonName/middleName/text()')
            if item["middleName"]:
                item["middleName"] = item["middleName"].extract()[0]
            else:
                item["middleName"] = "NA"
        return item

I extract the information to a .json file with this command: scrapy crawl DFORM -o tes4.json -t json

1 Answer:

Answer 0 (score: 1)

Try something like this. In your callback, a single item is created before the loop and `return item` runs only once, after the loop has finished, so each iteration overwrites the previous person's fields and only one item ever comes back. Instead, instantiate a new item on every iteration, collect them, and return them all:

def parse_xml_document(self, response):

    xxs = XmlXPathSelector(response)

    items = []

    # common field values
    stateOrCountryDescription = xxs.select('./primaryIssuer/issuerAddress/stateOrCountryDescription/text()').extract()[0]
    zipCode = xxs.select('./primaryIssuer/issuerAddress/zipCode/text()').extract()[0]
    issuerPhoneNumber = xxs.select('./primaryIssuer/issuerPhoneNumber/text()').extract()[0]

    for person in xxs.select('./relatedPersonsList//relatedPersonInfo'):

        # instantiate one item per loop iteration
        item = SecformD()

        # save common parameters
        item["stateOrCountryDescription"] = stateOrCountryDescription
        item["zipCode"] = zipCode
        item["issuerPhoneNumber"] = issuerPhoneNumber

        item["firstName"] = person.select('./relatedPersonName/firstName/text()').extract()[0]
        item["middleName"] = person.select('./relatedPersonName/middleName/text()')
        if item["middleName"]:
            item["middleName"] = item["middleName"].extract()[0]
        else:
            item["middleName"] = "NA"

        items.append(item)

    return items
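Equivalently, the callback can be written as a generator that `yield`s each item as soon as it is built, instead of collecting them in a list. The same pattern, shown outside Scrapy as a plain-Python sketch (field name taken from the question; the XML is a made-up two-person sample):

```python
import xml.etree.ElementTree as ET

XML = """<relatedPersonsList>
    <relatedPersonInfo><relatedPersonName><firstName>Mark</firstName></relatedPersonName></relatedPersonInfo>
    <relatedPersonInfo><relatedPersonName><firstName>Jane</firstName></relatedPersonName></relatedPersonInfo>
</relatedPersonsList>"""

def parse_persons(xml_text):
    root = ET.fromstring(xml_text)
    for person in root.findall('relatedPersonInfo'):
        # one fresh dict per iteration, yielded immediately --
        # nothing is overwritten, and every person is emitted
        yield {'firstName': person.findtext('relatedPersonName/firstName')}

names = [item['firstName'] for item in parse_persons(XML)]
print(names)  # ['Mark', 'Jane']
```

In a Scrapy callback the same `yield item` inside the loop works directly: the framework iterates over whatever the callback returns, whether it's a list or a generator.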