Finding the number of pages with Python BeautifulSoup

Date: 2018-02-28 17:18:39

Tags: python web-scraping beautifulsoup

I want to extract the total number of pages (11 in this example) from a Steam page. I believe the following code should work (return 11), but find_all() returns an empty list. It's as if it can't find the paged_items_paging_pagelink class.

import requests
import re
from bs4 import BeautifulSoup
r = requests.get('http://store.steampowered.com/tags/en-us/RPG/')
c = r.content
soup = BeautifulSoup(c, 'html.parser')


# find_all() comes back empty here, so indexing with [-1] fails
total_pages = soup.find_all("span", {"class": "paged_items_paging_pagelink"})[-1].text
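
A quick sanity check, simply searching the downloaded HTML for the class name, suggests it may not appear in the static source at all (a minimal sketch of that check):

import requests

r = requests.get('http://store.steampowered.com/tags/en-us/RPG/')
# If this prints False, the pagination markup is not in the raw HTML,
# which would explain why find_all() returns nothing.
print('paged_items_paging_pagelink' in r.text)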

2 Answers:

Answer 0 (score: 2):

Another, quicker way that doesn't use BeautifulSoup:

import requests

url = "http://store.steampowered.com/contenthub/querypaginated/tags/NewReleases/render/?query=&start=20&count=20&cc=US&l=english&no_violence=0&no_sex=0&v=4&tag=RPG" # This returns your query in json format
r = requests.get(url)

print(round(r.json()['total_count'] / 20)) # total_count = number of records, 20 = items per page

11
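
round() happens to give 11 here, but it can round down when the final page is less than half full. For an exact page count from the same endpoint, rounding up is safer. A minimal variant, assuming the JSON still exposes the total_count field and that 20 items are requested per page:

import math
import requests

# Same endpoint as above; the count=20 parameter sets the page size.
url = ("http://store.steampowered.com/contenthub/querypaginated/tags/NewReleases/render/"
       "?query=&start=20&count=20&cc=US&l=english&no_violence=0&no_sex=0&v=4&tag=RPG")
per_page = 20

total = requests.get(url).json()['total_count']  # total number of records
print(math.ceil(total / per_page))               # rounds up, so a partial final page still counts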

Answer 1 (score: 1):

If you check the page source, you won't find the content you're after. That means it is generated dynamically via JavaScript.

The page numbers sit inside the <span id="NewReleases_links"> tag, but in the page source the HTML shows only this:

<span id="NewReleases_links"></span>

The simplest way to handle this is to use Selenium.
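
A minimal sketch of that approach (assuming Selenium 4 with a ChromeDriver available on PATH; once the JavaScript has run, the pagination links exist in the live DOM):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get('http://store.steampowered.com/tags/en-us/RPG/')
    # Wait until the JavaScript-rendered pagination links show up.
    WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.CLASS_NAME, 'paged_items_paging_pagelink'))
    )
    links = driver.find_elements(By.CLASS_NAME, 'paged_items_paging_pagelink')
    print(links[-1].text)  # the last pagination link, e.g. 11
finally:
    driver.quit()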

However, the text Showing 1-20 of 213 results is available in the page source. So you can grab that and work out the number of pages from it.

The relevant HTML:

<div class="paged_items_paging_summary ellipsis">
    Showing 
    <span id="NewReleases_start">1</span>
    -
    <span id="NewReleases_end">20</span> 
    of 
    <span id="NewReleases_total">213</span> 
    results         
</div>

Code:

import requests
from bs4 import BeautifulSoup

r = requests.get('http://store.steampowered.com/tags/en-us/RPG/')
soup = BeautifulSoup(r.text, 'lxml')

def get_pages_no(soup):
    # "of 213 results" -> total number of items across all pages
    total_items = int(soup.find('span', id='NewReleases_total').text)
    # "Showing 1-20" -> 20 items are shown per page
    items_per_page = int(soup.find('span', id='NewReleases_end').text)
    return round(total_items / items_per_page)

print(get_pages_no(soup))
# prints 11

(Note: I would still recommend using Selenium, since most of this site's content is generated dynamically and scraping all of that data otherwise would be painful.)