Python Request not fetching data from the second page

Date: 2018-03-15 23:29:47

Tags: python python-3.x web-scraping python-requests python-responses

I am trying to scrape movie reviews from the Fandango website. Even when I hit the URL for the second page of a particular movie's reviews, I still get the first page back. Do I need to send a cookie with the request?

Here is my code snippet:

from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

baseUrl = 'https://www.fandango.com/movie-reviews'
req = Request(baseUrl, headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()
soup = BeautifulSoup(webpage, 'html.parser')

# Getting all the movie links from the first page
movieLinks = soup.find_all("a", class_='dark')

# Get reviews for every movie
for i in range(2):  # use len(movieLinks) to iterate over all movies
    try:
        movieName = movieLinks[i].text.replace(' Review', '')
        count = 1
        print('\n\n****** ' + movieName + ' ********\n\n')
        # Getting reviews from the first 3 pages of each movie
        for j in range(3):
            pageNum = j + 1
            movieReviewUrl = movieLinks[i]['href'] + '?pn=' + str(pageNum)
            print('Hitting URL: ' + movieReviewUrl)
            revReq = Request(movieReviewUrl, headers = {'User-Agent': 'Mozilla/5.0'})
            revWebpage = urlopen(revReq).read()
            revSoup = BeautifulSoup(revWebpage, 'html.parser')
            revArr = revSoup.find_all("p", class_ = "fan-reviews__item-content")
            for k in range(len(revArr)):
                if len(revArr[k])>0:
                    print(str(count) + ' : ' + revArr[k].text)
                    count = count + 1
    except Exception:
        print('Error for movie: ' + movieName)
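
In case cookies are the issue, urllib can persist them across requests by building an opener around http.cookiejar. A minimal sketch of that approach, reusing the base URL from my code and the same ?pn pagination parameter (whether Fandango actually paginates via cookies is exactly what I am unsure about):

import http.cookiejar
from urllib.request import Request, build_opener, HTTPCookieProcessor

# A shared cookie jar stores any cookies set on the first response
# and replays them on every later request made through this opener.
jar = http.cookiejar.CookieJar()
opener = build_opener(HTTPCookieProcessor(jar))

baseUrl = 'https://www.fandango.com/movie-reviews'
firstReq = Request(baseUrl, headers={'User-Agent': 'Mozilla/5.0'})
firstPage = opener.open(firstReq).read()

# The second request goes through the same opener, so the stored
# cookies are sent along with it automatically.
secondReq = Request(baseUrl + '?pn=2', headers={'User-Agent': 'Mozilla/5.0'})
secondPage = opener.open(secondReq).read()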

1 Answer:

Answer 0 (score: 0)

I'd suggest using Requests; it is much easier to handle this kind of request with it.

from bs4 import BeautifulSoup
import requests

baseUrl = 'https://www.fandango.com/movie-reviews'
# req = Request(baseUrl, headers={'User-Agent': 'Mozilla/5.0'})
webpage = requests.get(baseUrl).text
soup = BeautifulSoup(webpage, 'html.parser')

# Getting all the movie links from the first page
movieLinks = soup.find_all("a", class_='dark')

# Get reviews for every movie
for i in range(2):  # use len(movieLinks) to iterate over all movies
    try:
        movieName = movieLinks[i].text.replace(' Review', '')
        count = 1
        print('\n\n****** ' + movieName + ' ********\n\n')
        # Getting reviews from the first 3 pages of each movie
        for j in range(3):
            pageNum = j + 1
            movieReviewUrl = movieLinks[i]['href'] + '?pn=' + str(pageNum)
            print('Hitting URL: ' + movieReviewUrl)
            # revReq = Request(movieReviewUrl, headers={'User-Agent': 'Mozilla/5.0'})
            # revWebpage = urlopen(revReq).read()
            revWebpage = requests.get(movieReviewUrl).text
            revSoup = BeautifulSoup(revWebpage, 'html.parser')
            revArr = revSoup.find_all("p", class_="fan-reviews__item-content")
            print(len(revArr))
            for k in range(len(revArr)):
                if len(revArr[k]) > 0:
                    print(str(count) + ' : ' + revArr[k].text)
                    count = count + 1
    except Exception:
        print('Error for movie: ' + movieName)

When you run this, you can see that revArr comes back empty (length 0), so double-check the "fan-reviews__item-content" class.
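
One thing worth ruling out: the requests.get calls above send no User-Agent header (the original urllib code did), and the reviews may also be injected client-side by JavaScript. A minimal sketch, assuming the page is still served statically, that keeps the header and persists cookies via a Session:

import requests
from bs4 import BeautifulSoup

# A Session persists cookies between requests and lets us set
# the User-Agent once instead of on every call.
session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0'})

url = 'https://www.fandango.com/movie-reviews'  # page URL from the question
page = session.get(url, params={'pn': 2})       # 'pn' is the pagination param used above
soup = BeautifulSoup(page.text, 'html.parser')

reviews = soup.find_all('p', class_='fan-reviews__item-content')
print(len(reviews))  # if this is still 0, the reviews are probably rendered by JavaScript

If the count is still 0 with the header and cookies in place, inspect the page in the browser's network tab to find the endpoint the JavaScript actually loads the reviews from.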