Scraping Amazon with BeautifulSoup

Time: 2018-11-20 00:10:50

标签: python web-scraping beautifulsoup findall

I am trying to web-scrape the reviews on this Amazon product page: https://www.amazon.com/Python-Crash-Course-Hands-Project-Based/dp/1593276036/ref=sr_1_3?ie=UTF8&qid=1541450645&sr=8-3&keywords=python

Here is my code:

import requests as req
from bs4 import BeautifulSoup
headers = {'User-Agent': 'Kevin\'s_request'}
r = req.get('https://www.amazon.com/Python-Crash-Course-Hands-Project-Based/dp/1593276036/ref=sr_1_3?ie=UTF8&qid=1541450645&sr=8-3&keywords=python', headers=headers)
soup = BeautifulSoup(r.text, "html.parser")
soup.find(class_="a-expander-content a-expander-partial-collapse-content")

I only get an empty list back. I am using Python 3.6.4 with BS4 in Jupyter Notebook.

2 Answers:

Answer 0 (score: 1)

Try this approach. It turns out your selector doesn't find anything, so I've changed it to one that does:

import requests
from bs4 import BeautifulSoup

def get_reviews(s, url):
    # Use a browser-like User-Agent so Amazon serves the normal page
    s.headers['User-Agent'] = 'Mozilla/5.0'
    response = s.get(url)
    soup = BeautifulSoup(response.text, "lxml")
    # The collapsed review text sits in divs carrying this data-hook
    return soup.find_all("div", {"data-hook": "review-collapsed"})

if __name__ == '__main__':
    link = 'https://www.amazon.com/Python-Crash-Course-Hands-Project-Based/dp/1593276036/ref=sr_1_3?ie=UTF8&qid=1541450645&sr=8-3&keywords=python'
    with requests.Session() as s:
        for review in get_reviews(s, link):
            print(f'{review.text}\n')
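
If you also need the review title and star rating rather than just the text, the same approach can be extended. The data-hook values used below ("review", "review-title", "review-star-rating") are assumptions about Amazon's markup at the time and may change, so treat this as a rough sketch rather than a verified solution:

import requests
from bs4 import BeautifulSoup

def get_review_details(s, url):
    # Same session/User-Agent approach as the snippet above
    s.headers['User-Agent'] = 'Mozilla/5.0'
    soup = BeautifulSoup(s.get(url).text, "lxml")
    details = []
    # data-hook values are assumed from Amazon's markup and may change
    for review in soup.find_all("div", {"data-hook": "review"}):
        title = review.find("a", {"data-hook": "review-title"})
        rating = review.find("i", {"data-hook": "review-star-rating"})
        body = review.find("div", {"data-hook": "review-collapsed"})
        details.append({
            "title": title.get_text(strip=True) if title else None,
            "rating": rating.get_text(strip=True) if rating else None,
            "body": body.get_text(strip=True) if body else None,
        })
    return details

if __name__ == '__main__':
    link = 'https://www.amazon.com/dp/1593276036'
    with requests.Session() as s:
        for d in get_review_details(s, link):
            print(d)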

Answer 1 (score: 0)

Not sure what's going on at your end, but this code works fine for me (Python 3.6, BS4 4.6.3):

import requests
from bs4 import BeautifulSoup

def s_comments(url):
    headers = {'User-Agent': 'Bob\'s_request'}
    response = requests.get(url, headers=headers)
    if response.status_code != 200:
        raise ConnectionError

    # Pass an explicit parser to avoid BeautifulSoup's "no parser specified" warning
    soup = BeautifulSoup(response.content, "html.parser")
    # The collapsed review text carries this pair of CSS classes
    return soup.find_all(class_="a-expander-content a-expander-partial-collapse-content")


url = 'https://www.amazon.com/dp/1593276036'
reviews = s_comments(url)
for i, review in enumerate(reviews):
    print('---- {} ----'.format(i))
    print(review.text)
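
If you want to keep the scraped reviews rather than just print them, a small follow-up using the standard csv module could look like this. It assumes the reviews list from the snippet above, and reviews.csv is just an example filename:

import csv

# `reviews` is the list of tags returned by s_comments() above
with open('reviews.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['index', 'review'])
    for i, review in enumerate(reviews):
        writer.writerow([i, review.text.strip()])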