How to make an XHR POST request with Python

Date: 2017-07-16 11:24:59

Tags: python web-scraping xmlhttprequest

So, I'm trying to scrape a website that needs a POST request to retrieve its data, but I've had no luck. My last attempt was this:

    from requests import Session
    from bs4 import BeautifulSoup

    session = Session()

    # HEAD requests ask for *just* the headers, which is all you need to grab the
    # session cookie
    session.head('http://www.betrebels.gr/sports')

    response = session.post(
        # url='https://sports-itainment.biahosted.com/WebServices/SportEvents.asmx/GetEvents',
        url='http://www.betrebels.gr/sports',
        data={
            'champIds': '["1191783","1191784","1191785","939911","939912","939913","939914","175","190686","198881","542378","217750","91","201","2","38","201614","454","63077","60920","384","49251","61873","87095","110401","111033","122008","122019","342","343","344","430","213","95","10","1240912","1237673","1239055","339","340","124","1381","260549","1071542","437","271","510","1241462","72","277","137","308","488","2131","59178","433","434","347","203","348","349","92420","148716","322","184","127983","321","88173","417","418","284","2688","103419","618","487","56029","214640","215229","514","92","302","1084811","1084813","1084831","68739","81852","406","100","70","172","351","541730","541732","541733","548965","552442","554615","554616","554617","361","136","519","279","65","319","364","75","220","194676","149","121443","110902","171694","152501","568313","126998","758","740","1264928"]',
            'dateFilter': 'All',
            'eventIds': '[]',
            'marketsId': '-1',
            'skinId': 'betrebels'
        },
        headers={
            'Accept': 'application/json, text/javascript, */*; q=0.01',
            'Accept-Encoding': 'gzip, deflate, br',
            'Accept-Language': 'el-GR,el;q=0.8',
            'Connection': 'keep-alive',
            'Content-Length': '701',
            'Content-Type': 'application/json; charset=UTF-8',
            'Cookie': 'Language=el-gr; ASP.NET_SessionId=kp0b2xwf2vzuci4uwn33uh1o; IsBetApp=False; _ga=GA1.2.1005994943.1499255280; _gid=GA1.2.1197736989.1500201903; _gat=1; ParentUrl=ParentUrl is not need',
            'DNT': '1',
            'Host': 'sports-itainment.biahosted.com',
            'Origin': 'https://sports-itainment.biahosted.com',
            'Referer': 'https://sports-itainment.biahosted.com/generic/prelive.aspx?token=&clientTimeZoneOffset=-180&lang=el-gr&walletcode=508729&skinid=betrebels&parentUrl=https%3A//ps.equalsystem.com/ps/game/BIASportbook.action',
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36',
            'X-Requested-With': 'XMLHttpRequest'
        }
    )

    print(response.text)




    soup = BeautifulSoup(response.content, "html.parser")

    # leagues = soup.find_all("div", {"class": "header"})[0].text
    # print(leagues)
    leagues = soup.find_all("div", {"class": "championship-header"})
    links = soup.find_all("a")

    for link in links:
        print(link.get("href"), link.text)

    for item in leagues:
        # print(item.contents[0].find_all("div", {"class": "header"})[0].text)
        print(item.find_all("div", {"class": "header"})[0].text)
        print(item.find_all("span")[0].text)

How can I scrape all the football leagues from betrebels.com?
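One likely problem with the attempt above: the request declares `Content-Type: application/json`, but passing a dict via `data=` sends a form-encoded body. With requests you can pass `json=` instead, which serializes the payload to JSON and sets the content type for you, so the hand-written `Content-Length` and `Content-Type` headers become unnecessary. A minimal sketch, assuming the ASMX endpoint and field names from the question (whether the server answers without the original session cookies is untested, so this only prepares the request rather than sending it):

```python
import requests

# Payload fields copied from the question; the champ ID list is trimmed here.
payload = {
    "champIds": '["1191783","1191784"]',
    "dateFilter": "All",
    "eventIds": "[]",
    "marketsId": "-1",
    "skinId": "betrebels",
}

session = requests.Session()
req = requests.Request(
    "POST",
    "https://sports-itainment.biahosted.com/WebServices/SportEvents.asmx/GetEvents",
    json=payload,
    headers={"X-Requested-With": "XMLHttpRequest"},
)

# Preparing (without sending) lets us inspect what would go over the wire.
prepared = session.prepare_request(req)
print(prepared.headers["Content-Type"])          # application/json
print(b'"skinId": "betrebels"' in prepared.body)
```

To actually send it, `session.send(prepared)` or simply `session.post(url, json=payload, ...)` would be used; requests computes the correct `Content-Length` itself.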

1 answer:

Answer 0: (score: 0)

So, the actual data is cleaner and easier to get from the real source, as you can see if you read through your browser's requests carefully. Here is the URL: https://s5.sir.sportradar.com/betinaction/en/1

It's also JSON underneath, which means you can cut this down to just the requests module (plus the json module if you need it), though requests can already return the raw JSON parsed as a dictionary for you.
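In other words, once the source is a JSON endpoint, there is nothing for BeautifulSoup to do: `response.json()` hands you a plain dictionary. A small offline sketch of that parsing step; the sample structure below is purely illustrative, since the real feed's layout isn't shown in the answer:

```python
import json

# Illustrative sample resembling a JSON sports feed; the real sportradar
# feed will be shaped differently.
raw = '{"doc": [{"data": {"name": "Premier League", "_id": 17}}]}'

# json.loads() is exactly what requests' response.json() does for you.
parsed = json.loads(raw)
league = parsed["doc"][0]["data"]
print(league["name"], league["_id"])  # Premier League 17
```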

All of this means you can radically simplify the scraping process to get what you want.

You can find all the leagues for all countries here: https://ls.sportradar.com/ls/feeds/?/betinaction/en/Europe:Berlin/gismo/config_tree/41/0/1 You just need to grab all the _id fields, then use URLs of the form https://s5.sir.sportradar.com/betinaction/en/1/category/ plus the _id.
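The step above can be sketched as: walk the config_tree JSON, collect every `_id`, and append each one to the category URL. The nested sample data below is a hypothetical stand-in, since the real feed's nesting isn't reproduced here; only the URL pattern comes from the answer:

```python
def collect_ids(node):
    """Recursively gather every '_id' value from nested dicts/lists."""
    ids = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "_id":
                ids.append(value)
            else:
                ids.extend(collect_ids(value))
    elif isinstance(node, list):
        for item in node:
            ids.extend(collect_ids(item))
    return ids

BASE = "https://s5.sir.sportradar.com/betinaction/en/1/category/"

# Hypothetical sample; the real config_tree feed is nested differently.
tree = {"categories": [{"_id": 4, "name": "England"},
                       {"_id": 30, "name": "Greece",
                        "sub": [{"_id": 304}]}]}

urls = [BASE + str(_id) for _id in collect_ids(tree)]
print(urls)  # three /category/ URLs, one per _id found
```

In practice `tree` would be the dictionary returned by `requests.get(feed_url).json()`.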

But if you had inspected the requests, you should also have caught the original URLs...

I'll leave the rest to you, but everything you want is there, and it's much easier to read and access.