How to scrape a page with BeautifulSoup

Asked: 2019-06-13 08:12:52

Tags: python beautifulsoup screen-scraping scrape

The question I'm asking is simple, but it just doesn't work for me and I don't know why!

I want to use BeautifulSoup to scrape the beer rating from this page https://www.brewersfriend.com/homebrew/recipe/view/16367/southern-tier-pumking-clone, but it doesn't work.

Here is my code:

import requests
import bs4
from bs4 import BeautifulSoup



url = 'https://www.brewersfriend.com/homebrew/recipe/view/16367/southern-tier-pumking-clone'

test_html = requests.get(url).text

soup = BeautifulSoup(test_html, "lxml")

rating = soup.findAll("span", class_="ratingValue")

rating

When I run this it doesn't work, but if I do exactly the same thing on another page it works... I don't get it. Can someone help me? The rating should come out as 4.58.

Thanks everyone!

5 Answers:

Answer 0 (score: 2)

If you print test_html, you will see that you received a 403 Forbidden response.

You should add headers to your GET request (at the very least a user-agent :).

import requests
from bs4 import BeautifulSoup


headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36'
}

url = 'https://www.brewersfriend.com/homebrew/recipe/view/16367/southern-tier-pumking-clone'

test_html = requests.get(url, headers=headers).text

soup = BeautifulSoup(test_html, 'html5lib')

rating = soup.find('span', {'itemprop': 'ratingValue'})

print(rating.text)

# 4.58
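
One thing worth noting (not part of the original answer): you can confirm the 403 directly from the status code instead of reading the raw HTML, and the html5lib parser used above has to be installed separately (pip install html5lib). A minimal sketch of that check, reusing the same url and headers as above:

import requests

url = 'https://www.brewersfriend.com/homebrew/recipe/view/16367/southern-tier-pumking-clone'
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36'
}

# Without the user-agent the site refuses the request; with it the page loads.
print(requests.get(url).status_code)                   # expected: 403
print(requests.get(url, headers=headers).status_code)  # expected: 200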

Answer 1 (score: 0)

The reason you get the Forbidden status code (HTTP error 403) is that the server understood your request but refuses to fulfill it. If you try to scrape many of the more popular websites, which have security features in place to keep bots out, you will definitely run into this error. So you need to disguise your request!

  1. To do this, you need to send Headers
  2. You also need to correct the attribute of the tag whose data you want to fetch, i.e. itemprop
  3. Use lxml as the tree builder, or any other builder of your choice

    import requests
    from bs4 import BeautifulSoup
    
    
    url = 'https://www.brewersfriend.com/homebrew/recipe/view/16367/southern-tier-pumking-clone'
    
    # Add this 
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'}
    
    test_html = requests.get(url, headers=headers).text      
    
    soup = BeautifulSoup(test_html, 'lxml')
    
    rating = soup.find('span', {'itemprop':'ratingValue'})
    
    print(rating.text)
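
As a side note (not from the original answer), the same element can also be selected with a CSS attribute selector, which some people find easier to read; a minimal sketch, assuming the soup object built above:

    # equivalent lookup using a CSS selector instead of find()
    rating = soup.select_one('span[itemprop="ratingValue"]')
    print(rating.text)  # 4.58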
    

Answer 2 (score: 0)

The page you requested comes back as 403 Forbidden, so you may not see an error message, but you will get an empty result []. To avoid that, we add a user-agent; this code will get you the result you want.

import urllib.request
from bs4 import BeautifulSoup

user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.7) Gecko/2009021910 Firefox/3.0.7'

url = "https://www.brewersfriend.com/homebrew/recipe/view/16367/southern-tier-pumking-clone"
headers = {'User-Agent': user_agent}

request = urllib.request.Request(url, None, headers)  # the assembled request
response = urllib.request.urlopen(request)
soup = BeautifulSoup(response, "lxml")

rating = soup.find('span', {'itemprop': 'ratingValue'})

print(rating.text)
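
Unlike requests, which hands back the 403 body without raising, urllib's urlopen raises an HTTPError when the server refuses the request. A sketch of catching that explicitly (reusing url, headers and the BeautifulSoup import from above):

import urllib.request
import urllib.error

try:
    request = urllib.request.Request(url, None, headers)
    response = urllib.request.urlopen(request)
except urllib.error.HTTPError as e:
    # This is where the 403 shows up if the User-Agent header is missing.
    print('Request failed:', e.code, e.reason)
else:
    soup = BeautifulSoup(response, "lxml")
    print(soup.find('span', {'itemprop': 'ratingValue'}).text)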

Answer 3 (score: 0)

import requests
from bs4 import BeautifulSoup


headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36'
}

url = 'https://www.brewersfriend.com/homebrew/recipe/view/16367/southern-tier-pumking-clone'

test_html = requests.get(url, headers=headers).text

soup = BeautifulSoup(test_html, 'html5lib')

rating = soup.find('span', {'itemprop': 'ratingValue'})

print(rating.text)

Answer 4 (score: -1)

You are running into this error because some websites cannot be scraped with Beautiful Soup alone. For sites like that you have to use Selenium.

  • Download the latest chromedriver for your operating system from this link
  • Install selenium with the command "pip install selenium"
# import required modules
from selenium import webdriver
from bs4 import BeautifulSoup
import time, os

current_dir = os.getcwd()
print(current_dir)

# concatenate your current dir with the web driver name; if you are on Windows change '/' to '\\'
# make sure you placed chromedriver in the current directory
driver = webdriver.Chrome(current_dir + '/chromedriver')

# driver.get opens the url in the browser
driver.get('https://www.brewersfriend.com/homebrew/recipe/view/16367/southern-tier-pumking-clone')
time.sleep(1)

# fetch the rendered HTML from the driver
super_html = driver.page_source

# now parse the raw HTML with 'html.parser'
soup = BeautifulSoup(super_html, "html.parser")
rating = soup.findAll("span", itemprop="ratingValue")
print(rating[0].text)
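
If you would rather not have a Chrome window open while this runs, Selenium can also drive Chrome headlessly. This is just one possible variation (not part of the original answer), using the chrome Options API and assuming chromedriver is reachable the same way as above:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
import os

options = Options()
options.add_argument('--headless')  # run Chrome without opening a window

driver = webdriver.Chrome(os.getcwd() + '/chromedriver', options=options)
driver.get('https://www.brewersfriend.com/homebrew/recipe/view/16367/southern-tier-pumking-clone')
html = driver.page_source
driver.quit()  # close the browser process when done

soup = BeautifulSoup(html, "html.parser")
print(soup.find("span", itemprop="ratingValue").text)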