How to reach deeper levels of a page when crawling with BeautifulSoup

Date: 2018-11-06 20:50:54

Tags: python beautifulsoup web-crawler

I am trying to build a small dataset by crawling a website. I use BeautifulSoup to fetch page information and want to collect some data about the products listed on the site. The problem is that the main page body never actually ends up in the soup, which keeps me from reaching the data I need.

My code:

import requests
from bs4 import BeautifulSoup

def get_pages(max_pages):
    page = 1
    while page <= max_pages:
        url = 'https://www.kickstarter.com/discover/advanced?category_id=16&woe_id=0&sort=magic&seed=2569226&page=' + str(page)
        source_code = requests.get(url)
        text_page = source_code.text
        soup = BeautifulSoup(text_page, 'html.parser')
        for link in soup.findAll('a', {'class': 'soft-black mb3'}): 
            href = link.get('href')
            print(href)

        page += 1

get_pages(1)

My question is: how do I get to the deeper levels of the page?

1 Answer:

Answer 0 (score: 0):

This seems to work for me. I ran it over 5 pages without issues.

from bs4 import BeautifulSoup
import re
import requests

def get_pages(max_pages):
    headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
    page = 1
    while page <= max_pages:
        url = 'https://www.kickstarter.com/discover/advanced?category_id=16&woe_id=0&sort=magic&seed=2569226&page=' + str(page)
        source_code = requests.get(url, headers=headers)
        soup = BeautifulSoup(source_code.text, 'lxml')
        # The project data is serialized inside these card divs; pull the
        # project URLs out of that text with a regex.
        classes = soup.findAll('div', class_='js-react-proj-card col-full col-sm-12-24 col-lg-8-24')
        urls = re.findall(r'"project":"https://www.kickstarter.com/.+?",', str(classes))
        for url in urls:
            # Strip the JSON punctuation around the matched URL, then fetch the project page.
            each_page = requests.get(url.replace(',', '').replace('"', '').replace('project:', ''), headers=headers)
            soup = BeautifulSoup(each_page.text, 'lxml')
            # I don't know what your end goal is, but this just prints the URL of each page.
            print(each_page.url)

        page += 1



Output:


https://www.kickstarter.com/projects/albertgajsak/makerphone-an-educational-diy-mobile-phone
https://www.kickstarter.com/projects/meadow/meadow-full-stack-net-standard-iot-platform
https://www.kickstarter.com/projects/simonegiertz/the-every-day-calendar
https://www.kickstarter.com/projects/keyboardio/model-01-travel-case-quickstarter
https://www.kickstarter.com/projects/44621210/qdee-robot-kit-a-whole-new-world-of-play-to-micro
https://www.kickstarter.com/projects/whambamsystems/wham-bam-the-best-flexible-bed-for-3d-printers-ava
https://www.kickstarter.com/projects/ludenso/magimask-immersive-high-definition-augmented-reali
https://www.kickstarter.com/projects/805332783/tinyjuice-the-smallest-self-adhesive-true-wireless
https://www.kickstarter.com/projects/2099924322/nebula-capsule-ii-worlds-first-android-tvtm-pocket
https://www.kickstarter.com/projects/767329947/dockcase-adapter-turn-your-macbook-pro-charger-int
https://www.kickstarter.com/projects/petato/footloose-next-gen-automatic-and-health-tracking-c
https://www.kickstarter.com/projects/1289187249/fingertip-microscope-bring-a-800x-microscope-on-yo
https://www.kickstarter.com/projects/bentristem/the-web-app-revolution-making-the-best-coding-cour
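For what it's worth, running a regex over `str(classes)` is brittle. At the time of writing, each project card div appears to embed its project data as JSON in a `data-project` attribute, so parsing that JSON directly is sturdier. A minimal sketch on a simplified, hypothetical card snippet (the attribute name and JSON shape are assumptions about Kickstarter's markup; verify against the live HTML):

```python
# Sketch: extract project URLs by parsing the card's embedded JSON instead
# of regexing the serialized HTML. The sample below is a hypothetical,
# simplified stand-in for one card from the listing page.
import json
from bs4 import BeautifulSoup

sample_html = """
<div class="js-react-proj-card col-full col-sm-12-24 col-lg-8-24"
     data-project='{"name": "Demo Project",
                    "urls": {"web": {"project": "https://www.kickstarter.com/projects/demo/demo-project"}}}'>
</div>
"""

soup = BeautifulSoup(sample_html, 'html.parser')
project_urls = []
for card in soup.find_all('div', class_='js-react-proj-card'):
    data = json.loads(card['data-project'])          # parse the embedded JSON
    project_urls.append(data['urls']['web']['project'])

print(project_urls)
```

Each extracted URL can then be fetched with `requests.get` exactly as in the answer's loop, with no quote/comma cleanup needed.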