BeautifulSoup Steam Market web-scraping error

Time: 2018-01-28 20:25:57

Tags: python web-scraping beautifulsoup anaconda

I'm trying to write a program with Python and BeautifulSoup 4 that goes through the Steam Market listings for a particular game (in this case Rust), looks at each item, and gets its name and price. So far I have this working for the first page (each page only shows 10 items), but when I change the URL to the second page I get exactly the same output as for the first page.

The URL I use for the first page is: https://steamcommunity.com/market/search?appid=252490#p1_popular_desc

The second page is: https://steamcommunity.com/market/search?appid=252490#p2_popular_desc

The code is:

import bs4 as bs
import urllib.request

for web_page in range(1, 3):
    print('webpage number is: ' + str(web_page))
    if web_page == 1:
        url = "https://steamcommunity.com/market/search?appid=252490#p1_popular_desc"
        print(url)
        sauce = urllib.request.urlopen(url).read()
        soup = bs.BeautifulSoup(sauce, 'lxml')

    if web_page == 2:
        url = "https://steamcommunity.com/market/search?appid=252490#p2_popular_desc"
        print(url)
        sauce = urllib.request.urlopen(url).read()
        soup = bs.BeautifulSoup(sauce, 'lxml')

    # print the name and price of every listing found on the current page
    for div in soup.find_all('a', class_='market_listing_row_link'):
        span = div.find('span', class_='normal_price')
        span2 = div.find('span', class_='market_listing_item_name')
        print(span2.text)
        print(span.text)

I'm not sure what the error here is; any help would be welcome.

1 Answer:

Answer 0 (score: 1):

Try this: the two URLs you are requesting differ only in the fragment (the part after the #), which the browser never sends to the server, and the listings themselves appear to be filled in by JavaScript, so urllib downloads the same static HTML both times. You need to install selenium and geckodriver for Firefox; you need this: pypi.python.org/pypi/selenium (happy scripting :>)
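As a quick sanity check that both requests really are identical, here is a small sketch using only the standard library's urllib.parse (nothing Steam-specific is assumed here):

from urllib.parse import urlparse

first = urlparse("https://steamcommunity.com/market/search?appid=252490#p1_popular_desc")
second = urlparse("https://steamcommunity.com/market/search?appid=252490#p2_popular_desc")

# Only the fragment differs between the two URLs...
print(first.fragment, second.fragment)  # p1_popular_desc p2_popular_desc

# ...and once the fragment is stripped, which is all the server ever sees,
# the two requests are exactly the same.
print(first._replace(fragment='') == second._replace(fragment=''))  # True

Since the server always returns the same document, you need a browser that actually runs the page's JavaScript; the generic Selenium template below shows the idea: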

# Mossein~King (I'm here to help)
import time
from bs4 import BeautifulSoup
from selenium import webdriver

# for testing purposes only
driver = webdriver.Firefox()

url = ''  # the page you want to scrape
driver.get(url)

# number of pages you would like to interact with
pages = 2
for x in range(pages):
    pagesource = driver.page_source
    soup = BeautifulSoup(pagesource, 'lxml')
    # do your stuff

    # go to next page
    # example if the next button is <a class='MosseinKing Is Awesome'>
    driver.find_element_by_xpath("//a[@class='MosseinKing Is Awesome']").click()
    # wait 2 seconds for the page to load
    time.sleep(2)
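To tie the template back to the question, here is a minimal sketch that drives the actual market page. It reuses the span classes from your own code for the name and price; the id of the "next page" control (searchResults_btn_next) and the fixed two-second sleeps are assumptions and may need adjusting after inspecting the page.

# Sketch only: the id of the "next page" button (searchResults_btn_next)
# is an assumption and may differ; check the page markup before relying on it.
import time
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://steamcommunity.com/market/search?appid=252490#p1_popular_desc")
time.sleep(2)  # crude wait for the first batch of listings to render

for page in range(2):
    soup = BeautifulSoup(driver.page_source, 'lxml')
    # same extraction as in the question, but run against the rendered page
    for row in soup.find_all('a', class_='market_listing_row_link'):
        name = row.find('span', class_='market_listing_item_name')
        price = row.find('span', class_='normal_price')
        print(name.text, price.text)

    # click through to the next page of results and give it time to load
    driver.find_element_by_id('searchResults_btn_next').click()
    time.sleep(2)

driver.quit()

A fixed sleep is the simplest thing that can work; if it proves flaky, Selenium's explicit waits (WebDriverWait with expected_conditions) are the usual replacement.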