How to get a response from a request in Scrapy without urllib?

Asked: 2019-01-27 12:43:38

Tags: scrapy

I believe there is a better way to get this response using scrapy.Request than what I do below:

...
import scrapy
import urllib.request
from scrapy.selector import Selector
from scrapy.http import HtmlResponse
...

class MatchResultsSpider(scrapy.Spider):
    name = 'match_results'
    allowed_domains = ['site.com']
    start_urls = ['url.com']

    def get_detail_page_data(self, detail_url):
        req = urllib.request.Request(
            detail_url,
            data=None,
            headers={
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36',
                'Accept': 'application/json, text/javascript, */*; q=0.01',
                'Referer': 'site.com',
            }
        )

        page = urllib.request.urlopen(req)
        response = HtmlResponse(url=detail_url, body=page.read())
        target = Selector(response=response)
        return target.xpath('//dd[@data-first_name]/text()').extract_first()

I get all the information I need inside my parse function, but in one place I have to fetch a small piece of data from inside a detail page:

# Lineups
lineup_team_tables = lineups_container.xpath('.//tbody')
for i, table in enumerate(lineup_team_tables):
    # lineup players
    line_up = []
    lineup_players = table.xpath('./tr[not(contains(string(), "Coach"))]')
    for lineup_player in lineup_players:
        line_up_entries = {}
        lineup_player_url = lineup_player.xpath('.//a/@href').extract_first()
        line_up_entries['player_id'] = get_id(lineup_player_url)
        line_up_entries['jersey_num'] = lineup_player.xpath('./td[@class="shirtnumber"]/text()').extract_first()

        abs_lineup_player_url = response.urljoin(lineup_player_url)
        line_up_entries['position_id_detail'] = self.get_detail_page_data(abs_lineup_player_url)

        line_up.append(line_up_entries)

    # team_lineup['line_up'] = line_up
    self.write_to_scuard(i, 'line_up', line_up)

Can I use scrapy.Request(detail_url, callback_func) to get data from another page?

Thanks for your help!

1 Answer:

Answer 0 (score: 1)

That is too much code. The simple scheme for parsing with Scrapy:

class ********(scrapy.Spider):
    name = '*******'
    domain = '****'
    allowed_domains = ['****']
    start_urls = ['https://******']

    custom_settings = {
        'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.84 Safari/537.36',
        'DEFAULT_REQUEST_HEADERS': {
            'ACCEPT': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
            'ACCEPT_ENCODING': 'gzip, deflate, br',
            'ACCEPT_LANGUAGE': 'en-US,en;q=0.9',
            'CONNECTION': 'keep-alive',
        },
    }

    def parse(self, response):
        # response already contains the HTML of start_urls = ['https://******']
        # url below is a detail-page link extracted from that response
        yield scrapy.Request(url, callback=self.parse_details)

There you can parse further (nested requests), and then return to the parse callback:

    def parse_details(self, response):
        ************
        yield scrapy.Request(url_2, callback=self.parse)
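
Applied to the lineup example from the question, a minimal sketch of this pattern could look like the methods below (placed inside the spider class). It reuses the question's selectors and its get_id helper, carries the half-built entry into the detail-page callback through request.meta, and drops urllib entirely; the callback name parse_player_detail is hypothetical:

    def parse(self, response):
        for lineup_player in response.xpath('.//tbody/tr[not(contains(string(), "Coach"))]'):
            player_url = lineup_player.xpath('.//a/@href').extract_first()
            entry = {
                'player_id': get_id(player_url),
                'jersey_num': lineup_player.xpath('./td[@class="shirtnumber"]/text()').extract_first(),
            }
            # carry the half-built entry into the detail-page callback
            yield scrapy.Request(
                response.urljoin(player_url),
                callback=self.parse_player_detail,
                meta={'entry': entry},
            )

    def parse_player_detail(self, response):
        entry = response.meta['entry']
        # same xpath the question used inside get_detail_page_data
        entry['position_id_detail'] = response.xpath('//dd[@data-first_name]/text()').extract_first()
        yield entry

Scrapy sends the headers from custom_settings with every request and fetches the detail pages asynchronously, so there is no need for a blocking urllib.request.urlopen call. Newer Scrapy versions also provide cb_kwargs as a cleaner alternative to meta for passing data between callbacks.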