Web scraping - McKinsey articles

Date: 2019-02-13 16:03:26

Tags: python web web-scraping scrapy

I am trying to get the article titles, but I don't know how to extract the headline text. Could you please look at my code below and suggest a solution?

I am a beginner. Thanks for your help!

Screenshot of the page in the browser developer tools: https://imgur.com/a/O1lLquY

import scrapy



class BrickSetSpider(scrapy.Spider):
    name = "brickset_spider"
    start_urls = ['https://www.mckinsey.com/search?q=Agile&start=1']

    def parse(self, response):
        for quote in response.css('div.text-wrapper'):
            item = {
                'text': quote.css('h3.headline::text').extract(),
            }
            print(item)
            yield item

2 Answers:

Answer 0 (score: 4)

Not bad for a new developer! I only changed the selector in the parse function:

for quote in response.css('div.block-list div.item'):
    yield {
        'text': quote.css('h3.headline::text').get(),
    }

UPD: Hmm, it looks like the site makes an additional request for the data.

Open the developer tools and inspect the request to https://www.mckinsey.com/services/ContentAPI/SearchAPI.svc/search with the parameters {"q":"Agile","page":1,"app":"","sort":"default","ignoreSpellSuggestion":false}. You can build a scrapy.Request with these parameters and the appropriate headers and get the JSON data back; the json library can parse it easily.
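
For a quick sanity check of the endpoint outside Scrapy, a minimal sketch using the third-party requests library could look like this (assuming requests is installed and the API accepts exactly this payload):

import requests

url = "https://www.mckinsey.com/services/ContentAPI/SearchAPI.svc/search"
payload = {"q": "Agile", "page": 1, "app": "", "sort": "default", "ignoreSpellSuggestion": False}

# POST the JSON payload; json= makes requests set the content-type header for us
resp = requests.post(url, json=payload)
data = resp.json()            # parse the JSON response body
print(list(data.keys()))      # inspect the top-level keys before writing the spider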

UPD2: As I can see from the curl command curl 'https://www.mckinsey.com/services/ContentAPI/SearchAPI.svc/search' -H 'content-type: application/json' --data-binary '{"q":"Agile","page":1,"app":"","sort":"default","ignoreSpellSuggestion":false}' --compressed, we need to make the request this way:

from scrapy import Request
import json

data = {"q": "Agile", "page": 1, "app": "", "sort": "default", "ignoreSpellSuggestion": False}
headers = {"content-type": "application/json"}
url = "https://www.mckinsey.com/services/ContentAPI/SearchAPI.svc/search"
yield Request(url, method='POST', headers=headers, body=json.dumps(data), callback=self.parse_api)

and then parse the response in the parse_api function:

def parse_api(self, response):
    data = json.loads(response.body)
    # and then extract what you need

So you can iterate the page parameter in the requests and fetch all pages.

UPD3: Working solution:

from scrapy import Spider, Request
import json


class BrickSetSpider(Spider):
    name = "brickset_spider"

    data = {"q": "Agile", "page": 1, "app": "", "sort": "default", "ignoreSpellSuggestion": False}
    headers = {"content-type": "application/json"}
    url = "https://www.mckinsey.com/services/ContentAPI/SearchAPI.svc/search"

    def start_requests(self):
        # The search API expects a POST with a JSON body, not a plain GET.
        yield Request(self.url, headers=self.headers, method='POST',
                      body=json.dumps(self.data), meta={'page': 1})

    def parse(self, response):
        data = json.loads(response.body)
        results = data.get('data', {}).get('results')
        # Stop paginating once the API returns no more results.
        if not results:
            return

        for row in results:
            yield {'title': row.get('title')}

        # Request the next page with the same payload, only bumping the page number.
        page = response.meta['page'] + 1
        self.data['page'] = page
        yield Request(self.url, headers=self.headers, method='POST',
                      body=json.dumps(self.data), meta={'page': page})
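
To run it and save the titles to a file, something like the following should work (the filename mckinsey_spider.py is just an example for wherever you save the spider):

scrapy runspider mckinsey_spider.py -o titles.json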

Answer 1 (score: 0)

If you just want to select the text of the h1 tags, all you need is:

[tag.css('::text').extract_first(default='') for tag in response.css('.attr')]

Using XPath instead might be easier:

 //h1[@class='state']/text()
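
Inside a Scrapy callback, that XPath could be used like this (the state class is just the example from this answer; substitute the class from the actual page):

titles = response.xpath("//h1[@class='state']/text()").getall()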

Also, I'd suggest checking out BeautifulSoup for Python. It is very simple and effective for reading a page's entire HTML and extracting text: https://www.crummy.com/software/BeautifulSoup/bs4/doc/

A very simple example looks like this:

from bs4 import BeautifulSoup

text = '''
<td><a href="http://www.fakewebsite.com">Please can you strip me?</a>
<br/><a href="http://www.fakewebsite.com">I am waiting....</a>
</td>
'''
soup = BeautifulSoup(text, 'html.parser')  # name a parser explicitly to avoid the "no parser specified" warning

print(soup.get_text())