Scrapy - grab all product details

Date: 2019-01-09 06:53:06

Tags: scrapy, scrapy-spider

I need to get all the product details (the ones with a green tick) from the following page: https://sourceforge.net/software/product/Budget-Maestro/

    product_details = ""  # accumulates one pipe-separated entry per section
    divs = response.xpath("//section[@class='row psp-section m-section-comm-details m-section-emphasized grey']/div[@class='list-outer column']/div")
    for div in divs:
        detail = div.xpath("./h3/text()").extract_first().strip() + ":"
        if detail != "Company Information:":
            divs2 = div.xpath(".//div[@class='list']/div")
            for div2 in divs2:
                dd = [val for val in div2.xpath("./text()").extract() if val.strip('\n').strip().strip('\n')]
                for d in dd:
                    detail = detail + d + ","
            detail = detail.strip(",")
            product_details = product_details + detail + "|"
    product_details = product_details.strip("|")

But it also gives me some of the features with \n in them, and I'm sure there must be a smarter, shorter way to do this.
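The stray \n survives because the list comprehension only strips each value for the membership test and then keeps the raw text node. A minimal sketch of the same loop with each value stripped before it is concatenated (the selectors are reused from the snippet above, so the page-structure assumptions are the asker's own):

    product_details = ""
    divs = response.xpath("//section[@class='row psp-section m-section-comm-details m-section-emphasized grey']/div[@class='list-outer column']/div")
    for div in divs:
        detail = div.xpath("./h3/text()").extract_first().strip() + ":"
        if detail != "Company Information:":
            # strip each text node before joining, so no newlines survive
            values = [val.strip() for val in div.xpath(".//div[@class='list']/div/text()").extract() if val.strip()]
            product_details += detail + ",".join(values) + "|"
    product_details = product_details.strip("|")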

2 Answers:

Answer 0 (score: 1):

If you only need the data under "Product Details", check the following:

    In [6]: response.css("section.m-section-comm-details div.list svg").xpath('.//following-sibling::text()').extract()
    Out[6]:
    [u' SaaS\n                        ',
     u' Windows\n                        ',
     u' Live Online ',
     u' In Person ',
     u' Online ',
     u' Business Hours ']
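The surrounding whitespace can be stripped in the same pass. A rough sketch of how this selector might sit inside a spider's parse method (the spider name and the yielded field name are made up for illustration):

    import scrapy

    class ProductDetailsSpider(scrapy.Spider):
        # hypothetical spider name and start URL, purely for illustration
        name = "budget_maestro_details"
        start_urls = ["https://sourceforge.net/software/product/Budget-Maestro/"]

        def parse(self, response):
            # text nodes that follow the green check-mark svg icons
            raw = response.css("section.m-section-comm-details div.list svg").xpath(
                ".//following-sibling::text()").extract()
            yield {"product_details": [text.strip() for text in raw if text.strip()]}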

Answer 1 (score: 1):

Use this:

    divs = [div.strip() for div in response.xpath('//*[contains(@class, "has-feature")]/text()').extract() if div.strip()]

Now divs is:

    [u'Accounts Payable', u'Accounts Receivable', u'Cash Management', u'General Ledger', u'Payroll', u'Project Accounting', u'"What If" Scenarios', u'Balance Sheet', u'Capital Asset Planning', u'Cash Management', u'Consolidation / Roll-Up', u'Forecasting', u'General Ledger', u'Income Statements', u'Multi-Company', u'Multi-Department / Project', u'Profit / Loss Statement', u'Project Budgeting', u'Run Rate Tracking', u'Version Control', u'"What If" Scenarios', u'Balance Sheet', u'Cash Management', u'Consolidation / Roll-Up', u'Forecasting', u'General Ledger', u'Income Statements', u'Profit / Loss Statement']

I hope this is what you wanted. Now iterate over this list with whatever logic you have. :)
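For instance, if the flat list is too coarse, the same has-feature selector can be scoped per section so each group of features stays under its heading; a sketch assuming the h3 / list-outer structure from the question's own XPath:

    grouped = {}
    for block in response.xpath("//section[contains(@class, 'm-section-comm-details')]//div[@class='list-outer column']/div"):
        heading = block.xpath("./h3/text()").extract_first(default="").strip()
        features = [f.strip() for f in block.xpath(".//*[contains(@class, 'has-feature')]/text()").extract() if f.strip()]
        if heading and features:
            grouped[heading] = features
    # grouped maps each section heading to its list of checked features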