Can't parse the different product links from a webpage

Time: 2019-11-05 17:18:53

Tags: python python-3.x web-scraping beautifulsoup python-requests

I've created a script in Python to fetch the different product links from a webpage. Although I know the content of that site is dynamic, I tried the conventional way anyway to show you what I've attempted. I looked for an API in the dev tools but couldn't find one. Isn't there any way to get those links using requests?

Site link: https://www.amazon.com/stores/node/10699640011

This is what I have written so far:

import requests
from bs4 import BeautifulSoup

link = "https://www.amazon.com/stores/node/10699640011"

def fetch_product_links(url):
    # Fetch the page with a browser-like User-Agent and parse it
    res = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
    soup = BeautifulSoup(res.text, "lxml")
    # The product grid is rendered client-side, so this selector matches
    # nothing in the raw HTML returned by requests
    for item_link in soup.select("[id^='ProductGrid-'] li[class^='style__itemOuter__'] > a"):
        print(item_link.get("href"))

if __name__ == '__main__':
    fetch_product_links(link)
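Incidentally, you can confirm the grid is injected client-side by checking whether the container id appears anywhere in the raw response (a quick diagnostic sketch, not part of the original attempt):

import requests

res = requests.get("https://www.amazon.com/stores/node/10699640011",
                   headers={"User-Agent": "Mozilla/5.0"})
# If this prints False, the product grid is rendered by JavaScript
# and can't be selected from the raw HTML
print("ProductGrid-" in res.text)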

How can I get the different product links from that site using requests?

1 Answer:

Answer 0 (score: 4)

I think you only need the asins, which you can gather from a different URL construct visible in the network tab, to shorten the final URL considerably. However, you do need to make a request to the original URL in order to pick up an identifier to use in the second URL. This returns 146 links.

import requests, re, json

node = '10699640011'

with requests.Session() as s:
    # First request: the store page embeds a slotsStr variable listing the slot ids
    r = s.get(f'https://www.amazon.com/stores/node/{node}')
    p = re.compile(r'var slotsStr = "\[(.*?,){3} share\]";')
    # With a repeated group, the captured value is the last repetition: the id we need
    identifier = p.findall(r.text)[0]
    identifier = identifier.strip()[:-1]  # drop surrounding whitespace and the trailing comma
    # Second request: the slot endpoint embeds a config object holding the ASIN list
    r = s.get(f'https://www.amazon.com/stores/slot/{identifier}?node={node}')
    p = re.compile(r'var config = (.*?);')
    data = json.loads(p.findall(r.text)[0])
    asins = data['content']['ASINList']
    links = [f'https://www.amazon.com/dp/{asin}' for asin in asins]
    print(links)
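If the pattern ever fails to match (for example, after a page layout change), findall returns an empty list and the [0] lookup raises a bare IndexError. A slightly more defensive version of the identifier extraction (a sketch assuming the same slotsStr pattern; extract_identifier is a hypothetical helper, not part of the answer above) could be:

import re

def extract_identifier(page_text):
    # Same pattern as above; with a repeated group, group(1) holds the last repetition
    p = re.compile(r'var slotsStr = "\[(.*?,){3} share\]";')
    m = p.search(page_text)
    if m is None:
        # Fail with a clear message instead of an IndexError on an empty result
        raise ValueError("slotsStr pattern not found; the page layout may have changed")
    return m.group(1).strip().rstrip(',')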

Edit:

With the two given nodes:

import requests, re, json
from bs4 import BeautifulSoup as bs

nodes = ['3039806011', '10699640011']

with requests.Session() as s:
    for node in nodes:
        r = s.get(f'https://www.amazon.com/stores/node/{node}')
        soup = bs(r.content, 'lxml')
        # Use the id of the last below-the-fold widget, skipping the share and
        # recommendation widgets, as the slot identifier
        identifier = soup.select('.stores-widget-btf:not([id=share],[id*=RECOMMENDATION])')[-1]['id']
        r = s.get(f'https://www.amazon.com/stores/slot/{identifier}?node={node}')
        p = re.compile(r'var config = (.*?);')
        # The embedded config JSON holds the ASIN list for this slot
        data = json.loads(p.findall(r.text)[0])
        asins = data['content']['ASINList']
        links = [f'https://www.amazon.com/dp/{asin}' for asin in asins]
        print(links)
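If you want to keep the results rather than just print them, one option (a sketch; all_links and product_links.json are hypothetical names) is to add all_links = [] before the loop and all_links.extend(links) inside it, then dump everything once at the end:

import json

# Write the accumulated links to disk as a single JSON array (hypothetical file name)
with open('product_links.json', 'w') as f:
    json.dump(all_links, f, indent=2)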