How do I scrape specific content from an AJAX response returned as text/html?

Asked: 2019-07-05 19:38:11

Tags: python html ajax web-scraping beautifulsoup

I'm able to get the AJAX URL and its response. The response is not JSON; dev tools report its content type as text/html; charset=UTF-8.

Here is the url

My problem is that this is one big chunk of text, and I want to scrape/parse out a very specific piece of it. I can also see lots of Python-style dictionaries and lists inside the code block.

My goal is to extract the "ASINList" part: [.......], and ultimately get every ASIN contained in that list.

How can I do this? I'm using BeautifulSoup.

I've already tried soup.find('script'), but looking at the HTML I don't know what to do with it.

</div>
</div>
<script>
P.when("stores-widget-productgrid").execute( function (Widget)    
"prices":{"price":{"price":{"isSuppressedByMAP":false,"currency": text i 
 dont need"{"ASINList"['asin','asin','asin','asin'],"More text" I dont 
 need":{text I dont need}, more and more and more text I do not need
</script>
</div>
</body>

I want to efficiently scrape that single AJAX URL to pull out the ASIN list, then put the list into a single-column dictionary to write to a dataframe. The final requirement is one column, "ASIN", with one ASIN per row.
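One way to take the soup.find('script') attempt a step further is to filter on the tag's text. A minimal sketch against a stand-in snippet (the script body here is placeholder content, not the real page):

import re
from bs4 import BeautifulSoup

# Stand-in for the real page; the script body is a placeholder.
html = '''
<div><script>
P.when("stores-widget-productgrid").execute(function (Widget) {
  var config = {"content": {"ASINList": ["B0040OD2IO", "B0040OID10"]}};
});
</script></div>
'''

soup = BeautifulSoup(html, 'html.parser')
# find() can filter on a tag's text: pick the <script> that mentions ASINList.
script = soup.find('script', string=re.compile('ASINList'))
print(script.string)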

2 Answers:

Answer 0 (score: 1):

The re module is enough to extract the JSON embedded in the page; you can then index into it with ['content']['ASINList']. Here you go:

import requests
import re
import json

if __name__ == "__main__":
    url = 'https://www.amazon.com/stores/slot/BBP_PRODUCT_GRID_18105981011?node=18105981011&slashargs=&productGridPageIndex=11&ingress=0&visitId=3d9f2885-f57e-42d6-a611-cc1c799c2b6b'
    res = requests.get(url)
    res.raise_for_status()
    html = res.text

    # The data is embedded as a JS literal: `var config = {...};` immediately
    # followed by a ReactDOM.render(...) call; capture the object between them.
    match = re.search(r"var config = (.*);\s+ReactDOM\.render", html, flags=re.MULTILINE)

    raw_json = match.group(1)                  # the captured {...} object
    parsed = json.loads(raw_json)              # the embedded literal is valid JSON
    asin_list = parsed['content']['ASINList']
    print(asin_list)

Output:

['B0040OD2IO', 'B0040OID10', 'B00BZHD7PM', 'B00302N4P8', 'B00BHEN9FQ', 'B004C7XAC2', 'B00DG8U51W', 'B000TVJPZG', 'B001NDQ2M8', 'B005PDUM8C', 'B0040ODFK4', 'B0030CX39K', 'B00C00JYDW', 'B005PFW63O', 'B01LQSUPOU', 'B006VWSVW0', 'B00KADQJHU', 'B01N5EUCQ3', 'B00C1WAZFU', 'B07K1SJJBM', 'B079V5DMMN', 'B0040OIDE2', 'B001LKOQRG', 'B0054YLAA6', 'B0068QWJDQ', 'B00N3ISFOY', 'B0040OIDCY', 'B00AIRNWXC', 'B005PDTQK2', 'B00ICRN2XK', 'B0068QTY8E', 'B00C00JY1O', 'B001HEL67Y', 'B00OD4AKYU', 'B005PFW4E0', 'B00N3IO69C', 'B01LWXIA74', 'B00BXV05A0', 'B005ZVZVDA', 'B07NZ1XNX5', 'B00N3IVIIE', 'B07GY3RG3V', 'B07PGWP3ZZ', 'B01LMOABHY', 'B00NHY56DM', 'B006ZMZCUA', 'B0040OJ756', 'B004LPJOIY', 'B0040OAV4C', 'B005PDUHLY', 'B005V1ZJ3Q', 
'B003VFX9KE', 'B00DG8SVP4', 'B07PSLKLTZ', 'B008Y057YC', 'B00427LWQ8', 'B005V4SPSO', 'B003TXKKN2', 'B00R6B6UOG', 'B07P8881RV', 'B07Q1K5GSR', 'B009M6Z1IO', 'B07FYSJYGJ', 'B01LQSZCDO', 'B07JFMCXKS', 'B0054Z6FN2', 'B07NJ7YP99', 'B00N3ITYFI', 'B002LPQ1SM', 'B00N3J1D92', 'B0040OHXO8', 'B01CDE1EX2', 'B07GY2W6QN', 'B0040OG132', 'B07JL2K3T3', 'B00N3IRD60', 'B071GF6MD8', 'B0040OF86S', 'B00N3IOE1W', 'B07G3HBZMY', 'B07P7KS33L', 'B07HSMZT9H', 'B00C05MA5Q', 'B00N3IO7SW', 'B00N3IWUDG', 'B00N3IRHFM', 'B0040OJGXO', 'B00K0JSZ58', 'B0040OOW24', 'B07HBDM5YC', 'B0040OIX8I', 'B01LYLBT4A', 'B0040OI5YK', 'B01LXKZLBN', 'B0040OI5QS', 'B00C0YYKQO', 'B0068QTIZI', 'B005PDUU20', 'B07JL2JZF6', 'B0040OQR40', 'B005PDUGUQ', 'B0040OHVLI', 'B00N3IPDUI', 'B0040OIU86', 'B005PFX32C', 'B005LEP3FC', 'B00N3IYIRC', 'B0040OQTIO', 'B003TXMT36', 'B0040OQY70', 'B004SLK8RW', 'B0040OOUJE', 'B005PFVCCK', 'B005PDUG06', 'B0040OBH1S', 'B07MH66SGN', 'B01IQ9DRCE', 'B00CBYFMCO', 'B005PDUU48', 'B0040OH1L8', 'B005WZ4VPS', 'B005PDUTWG', 'B005PFT8VW', 'B005PDUB74', 'B07DNJGTHL', 'B00N3INYK4', 'B005PDUJ9Y', 'B005PDUIZ4', 'B005PFVZYU', 'B07NYTWY1D', 'B0040OKI9U', 'B004SL71I6', 'B0040OJHVA', 'B001FO21TS', 'B0040O9EDG', 'B01M4RLIO5', 'B0015TCPFS', 'B005PDUCVE', 'B005MRRYZ0', 'B00DG8WM2C', 'B005HZZ0QM', 'B005PFTTIY', 'B002ZOCHJ6', 'B005PFW5SU', 'B0040ODLVW', 'B0040OH7XK']
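To get from that list to the final requirement (a single "ASIN" column, one ASIN per row), a minimal pandas sketch; asin_list below is a truncated placeholder for the list extracted above:

import pandas as pd

# Placeholder; in practice, use the asin_list produced by the code above.
asin_list = ['B0040OD2IO', 'B0040OID10', 'B00BZHD7PM']

# One column named "ASIN", one ASIN per row.
df = pd.DataFrame({'ASIN': asin_list})
print(df)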

Answer 1 (score: 0):

You're trying to scrape data out of a <script> tag, so BeautifulSoup won't help you much. But you can use re together with ast.literal_eval to parse out the values:

import requests
import re
from ast import literal_eval

url = 'https://www.amazon.com/stores/slot/BBP_PRODUCT_GRID_18105981011?node=18105981011&slashargs=&productGridPageIndex=11&ingress=0&visitId=3d9f2885-f57e-42d6-a611-cc1c799c2b6b'
txt = requests.get(url).text

# Grab the `"ASINList":[...]` fragment, wrap it in braces so it becomes a
# dict literal, and evaluate it safely with literal_eval.
d = literal_eval('{' + re.findall(r'"ASINList":\[".*?"\]', txt)[0] + '}')

for v in d['ASINList']:
    print(v)

Prints:

B0040OD2IO
B0040OID10
B00BZHD7PM
B00302N4P8
B00BHEN9FQ
B004C7XAC2
B00DG8U51W
B000TVJPZG
B001NDQ2M8
B005PDUM8C
B0040ODFK4
B0030CX39K
B00C00JYDW
B005PFW63O
B01LQSUPOU
B006VWSVW0
B00KADQJHU
B01N5EUCQ3
B00C1WAZFU

...and so on.
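As for why wrapping the match in braces works: the captured fragment "ASINList":["..."] becomes, once wrapped, a valid Python dict literal, which ast.literal_eval evaluates safely (unlike eval, it only accepts literals). A standalone demonstration:

from ast import literal_eval

# Shape of the fragment the regex captures from the page.
fragment = '"ASINList":["B0040OD2IO","B0040OID10"]'

d = literal_eval('{' + fragment + '}')  # now a valid dict literal
print(d['ASINList'])  # ['B0040OD2IO', 'B0040OID10']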