Getting the correct data with regex / list

Time: 2014-09-21 01:05:12

Tags: javascript python regex web-scraping scrapy

I am parsing the following code with a regular expression (I know, I know, but that's a story for another day):

data:{
            url: 'stage-team-stat'
        },
        defaultParams: {
            stageId : 9155,
            field: 2,
            teamId: 32
        }
    };

and I am parsing it with the following code (where var is the code above):

import re

stagematch = re.compile("data:\s*{\s*url:\s*'stage-team-stat'\s*},\s*defaultParams:\s*{\s*(.*?),.*},", re.S)

stagematch2 = re.search(stagematch, var)

if stagematch2 is not None:
    stagematch3 = stagematch2.group(1)

    stageid = int(stagematch3.split(':', 1)[1])
    stageid = str(stageid)

    teamid = int(stagematch3.split(':', 3)[1])
    teamid = str(teamid)

    print stageid
    print teamid

In this example I would expect stageid to be '9155' and teamid to be '32', but they both come back as '9155'.

Can anyone see what I am doing wrong?

Thanks

1 answer:

Answer 0 (score: 4)

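The first thing to note about the regex approach is why both values come back as 9155: the non-greedy (.*?) is followed by a literal comma, so it stops at the first comma inside defaultParams, and group(1) only ever holds the single pair stageId : 9155. Splitting that string on ':' therefore yields 9155 no matter what maxsplit you pass. A quick illustration of the split behaviour (the captured value below is written out by hand rather than taken from a live run):

captured = 'stageId : 9155'   # what group(1) holds once the pattern matches the full inline script

print captured.split(':', 1)[1]   # ' 9155'
print captured.split(':', 3)[1]   # still ' 9155' -- there is only one ':' to split on

To pull teamId out this way, the capture would have to span all the way to the teamId pair, at which point a real parser is the cleaner tool anyway.
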
An alternative solution is to not dig into regular expressions at all, but to parse the JavaScript code with a JavaScript parser. Example using slimit:

SlimIt is a JavaScript minifier written in Python. It compiles JavaScript into more compact code so that it downloads and runs faster.

SlimIt also provides a library that includes a JavaScript parser, lexer, pretty printer and tree visitor.

from slimit import ast
from slimit.parser import Parser
from slimit.visitors import nodevisitor

data = """
var defaultTeamStatsConfigParams = {
        data:{
            url: 'stage-team-stat'
        },
        defaultParams: {
            stageId : 9155,
            field: 2,
            teamId: 32
        }
    };

    DataStore.prime('stage-team-stat', defaultTeamStatsConfigParams.defaultParams, [{"RegionId":252,"RegionCode":"gb-eng","TournamentName":"Premier League","TournamentId":2,"StageId":9155,"Field":{"Value":2,"DisplayName":"Overall"},"TeamName":"Manchester United","TeamId":32,"GamesPlayed":4,"Goals":6,"Yellow":7,"Red":0,"TotalPasses":2480,"Possession":247,"AccuratePasses":2167,"AerialWon":61,"AerialLost":49,"Rating":7.01,"DefensiveRating":7.01,"OffensiveRating":6.79,"ShotsConcededIBox":13,"ShotsConcededOBox":21,"TotalTackle":75,"Interceptions":71,"Fouls":54,"WasFouled":46,"TotalShots":49,"ShotsBlocked":9,"ShotsOnTarget":19,"Dribbles":44,"Offsides":3,"Corners":17,"Throws":73,"Dispossesed":36,"TotalClearance":78,"Turnover":0,"Ranking":0}]);

    var stageStatsConfig = {
        id: 'team-stage-stats',
        singular: true,
        filter: {
                instanceType: WS.Filter,
                id: 'team-stage-stats-filter',
                categories: { data: [{ value: 'field' }] },
                singular: true
        },
        params: defaultTeamStatsConfigParams,
        content: {
            instanceType: TeamStageStats,
            view: {
                renderTo: 'team-stage-stats-content'
            }
        }
    };

    var stageStats = new WS.Panel(stageStatsConfig);
    stageStats.load();
"""

parser = Parser()
tree = parser.parse(data)
fields = {getattr(node.left, 'value', ''): getattr(node.right, 'value', '')
          for node in nodevisitor.visit(tree)
          if isinstance(node, ast.Assign)}

print fields['stageId'], fields['field'], fields['teamId']

This prints 9155 2 32.

Here we iterate over the syntax tree nodes and construct a dictionary from all of the assignments; among them we have stageId, field and teamId.

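Note that this picks up every key: value pair anywhere in the script, and the values come back as strings (or as an empty string when the right-hand side is not a simple literal). If only the numeric parameters are of interest, a slightly more defensive variant is sketched below, reusing the imports and the data string from the snippet above:

def get_numeric_fields(source):
    # Keep only assignments whose value is a plain integer literal.
    parser = Parser()
    tree = parser.parse(source)
    fields = {}
    for node in nodevisitor.visit(tree):
        if not isinstance(node, ast.Assign):
            continue
        name = getattr(node.left, 'value', '')
        value = getattr(node.right, 'value', '')
        if name and value.isdigit():
            fields[name] = int(value)
    return fields

numeric = get_numeric_fields(data)
print numeric.get('stageId'), numeric.get('teamId')   # 9155 32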

And here is how you would apply the solution to your scrapy spider:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector

from slimit import ast
from slimit.parser import Parser
from slimit.visitors import nodevisitor


def get_fields(data):
    parser = Parser()
    tree = parser.parse(data)
    return {getattr(node.left, 'value', ''): getattr(node.right, 'value', '')
            for node in nodevisitor.visit(tree)
            if isinstance(node, ast.Assign)}


class ExampleSpider(CrawlSpider):
    name = "goal2"
    allowed_domains = ["whoscored.com"]
    start_urls = ["http://www.whoscored.com/Teams/32/Statistics/England-Manchester-United"]
    download_delay = 5

    rules = [Rule(SgmlLinkExtractor(allow=('http://www.whoscored.com/Teams/32/Statistics/England-Manchester-United'),deny=('/News', '/Graphics', '/Articles', '/Live', '/Matches', '/Explanations', '/Glossary', 'ContactUs', 'TermsOfUse', 'Jobs', 'AboutUs', 'RSS'),), follow=False, callback='parse_item')]

    def parse_item(self, response):
        sel = Selector(response)
        titles = sel.xpath("normalize-space(//title)")
        myheader = titles.extract()[0]

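        # Grab the inline <script> block that follows the stats container and
        # run it through the slimit-based get_fields() helper defined above.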
        script = sel.xpath('//div[@id="team-stage-stats"]/following-sibling::script/text()').extract()[0]
        script_fields = get_fields(script)
        print script_fields['stageId'], script_fields['field'], script_fields['teamId']
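
If you want these values in the crawl output rather than only printed, one option is to wrap them in an item and yield it from parse_item. The item class below is purely illustrative and not part of the original spider:

from scrapy.item import Item, Field


class StageStatsItem(Item):
    # Illustrative item definition; the field names simply mirror the keys
    # extracted from the inline script block.
    stageId = Field()
    field = Field()
    teamId = Field()

and, inside ExampleSpider, a parse_item along these lines:

    def parse_item(self, response):
        sel = Selector(response)
        script = sel.xpath('//div[@id="team-stage-stats"]/following-sibling::script/text()').extract()[0]
        script_fields = get_fields(script)
        # Yielding an item lets a command such as scrapy crawl goal2 -o stats.json
        # write the values to a feed instead of printing them.
        yield StageStatsItem(
            stageId=script_fields.get('stageId'),
            field=script_fields.get('field'),
            teamId=script_fields.get('teamId'),
        )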