I can't scrape the pages with BeautifulSoup

Time: 2019-06-02 00:17:07

Tags: python html web-scraping beautifulsoup

I'm a beginner at web scraping. I'm following this tutorial (https://www.dataquest.io/blog/web-scraping-beautifulsoup/) to extract movie data from this link (https://www.imdb.com/search/title?release_date=2016-01-01,2019-05-01), and as a test I chose to extract movies released between 2016 and 2019. I only get 25 rows, but I'd like more than 30,000. Do you think that's possible?

Here is the code:

from requests import get
from bs4 import BeautifulSoup
import csv
import pandas as pd
from time import sleep
from random import randint
from time import time
from IPython.core.display import clear_output

headers = {"Accept-Language": "en-US, en;q=0.5"}

pages = [str(i) for i in range(1,5)]
years_url = [str(i) for i in range(2000,2018)]

url = 'https://www.imdb.com/search/title?release_date=2016-01-01,2019-05-01'
response = get(url)
html_soup = BeautifulSoup(response.text, 'html.parser')
type(html_soup)

movie_containers = html_soup.find_all('div', class_ = 'lister-item mode-advanced')

names = []
years = []
imdb_ratings = []
metascores = []
votes = []
start_time = time()
requests = 0

for year_url in years_url:
# For every page in the interval 1-4
   for page in pages:
# Make a get request
      response = get('http://www.imdb.com/search/title?release_date=' + year_url +'&sort=num_votes,desc&page=' + page, headers = headers)
# Pause the loop
      sleep(randint(8,15))
# Monitor the requests
      requests += 1
      elapsed_time = time() - start_time
 print('Request:{}; Frequency: {} requests/s'.format(requests, requests/elapsed_time))
clear_output(wait = True)
# Throw a warning for non-200 status codes
if response.status_code != 200:
  warn('Request: {}; Status code: {}'.format(requests, response.status_code))
# Break the loop if the number of requests is greater than expected
  if requests > 72:
    warn('Number of requests was greater than expected.')

# Parse the content of the request with BeautifulSoup
page_html = BeautifulSoup(response.text, 'html.parser')
# Select all the 50 movie containers from a single page
mv_containers = page_html.find_all('div', class_ = 'lister-item mode-advanced')

# Extract data from individual movie container
for container in movie_containers:
# If the movie has Metascore, then extract:
  if container.find('div', class_ = 'ratings-metascore') is not None:
# The name
   name = container.h3.a.text
   names.append(name)
# The year
   year = container.h3.find('span', class_ = 'lister-item-year').text
   years.append(year)
# The IMDB rating
   imdb = float(container.strong.text)
   imdb_ratings.append(imdb)
# The Metascore
   m_score = container.find('span', class_ = 'metascore').text
   metascores.append(int(m_score))
# The number of votes
   vote = container.find('span', attrs = {'name':'nv'})['data-value']
   votes.append(int(vote))


   movie_ratings = pd.DataFrame({'movie': names,
  'year': years,
  'imdb': imdb_ratings,
  'metascore': metascores,
  'votes': votes
  })

#data cleansing
movie_ratings = movie_ratings[['movie', 'year', 'imdb', 'metascore', 'votes']]
movie_ratings.head()
movie_ratings['year'].unique()
movie_ratings.to_csv('movie_ratings.csv')

2 Answers:

Answer 0: (score 1)

First things first: double-check your indentation (which, naughty naughty, is actually wrong in the tutorial itself. I'd guess it wasn't proofread properly after publishing, and the code ended up misaligned over and over).

To illustrate, you currently have something like this (showing reduced lines of code):

for year_url in years_url:
    for page in pages:
        response = get('http://www.imdb.com/search/title?release_date=' + year_url +'&sort=num_votes,desc&page=' + page, headers = headers)

page_html = BeautifulSoup(response.text, 'html.parser')

With your indentation, if the code runs at all, you only ever do the actual HTML parsing on the last URL you visited.

It should be:

for year_url in years_url:
    for page in pages:
        response = get('http://www.imdb.com/search/title?release_date=' + year_url +'&sort=num_votes,desc&page=' + page, headers = headers)
        page_html = BeautifulSoup(response.text, 'html.parser')
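
Applying the same principle to the whole loop from the question, a corrected sketch might look like the following. This is a sketch under a few assumptions rather than the tutorial's exact code: the counter is renamed requests_made (the original name shadows the requests package), the missing warn import is added, and the extraction loop iterates over this page's mv_containers instead of the movie_containers fetched once at the top of the script, which is the other reason the question only ever yields the first page's 25 rows:

from time import sleep, time
from random import randint
from warnings import warn

from requests import get
from bs4 import BeautifulSoup

headers = {"Accept-Language": "en-US, en;q=0.5"}
pages = [str(i) for i in range(1, 5)]
# The question's text targets 2016-2019; its code used range(2000, 2018)
years_url = [str(i) for i in range(2016, 2020)]

names, years, imdb_ratings, metascores, votes = [], [], [], [], []
start_time = time()
requests_made = 0

for year_url in years_url:
    for page in pages:
        # Everything below stays inside the inner loop, so every page gets parsed
        response = get('http://www.imdb.com/search/title?release_date=' + year_url
                       + '&sort=num_votes,desc&page=' + page, headers=headers)
        sleep(randint(8, 15))
        requests_made += 1
        if response.status_code != 200:
            warn('Request: {}; Status code: {}'.format(requests_made,
                                                       response.status_code))

        page_html = BeautifulSoup(response.text, 'html.parser')
        mv_containers = page_html.find_all('div', class_='lister-item mode-advanced')
        for container in mv_containers:
            if container.find('div', class_='ratings-metascore') is not None:
                names.append(container.h3.a.text)
                years.append(container.h3.find('span', class_='lister-item-year').text)
                imdb_ratings.append(float(container.strong.text))
                metascores.append(int(container.find('span', class_='metascore').text))
                votes.append(int(container.find('span',
                                                attrs={'name': 'nv'})['data-value']))

The DataFrame is then built once, after both loops finish, and the CSV step stays as in the question.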

Indentation has meaning in Python:

https://docs.python.org/3/reference/lexical_analysis.html?highlight=indentation

"Leading whitespace (spaces and tabs) at the beginning of a logical line is used to compute the indentation level of the line, which in turn is used to determine the grouping of statements."
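
As a minimal illustration of that rule:

for i in range(3):
    print('inside the loop')   # indented under the for, so it runs three times
print('after the loop')        # dedented, so it runs once, after the loop finishes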

Answer 1: (score 0)

It's hard to say exactly where the problem lies given the lack of functions, but from my point of view, you need to parse each page separately.

After every request, you need to parse its text. However, I suspect the main problem is the ordering of the code, and I'd suggest using functions.
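
A sketch of the function-based structure that suggestion points at, assuming the question's URL pattern and selectors (the helper names fetch_page and parse_movies are illustrative, not from the original):

from time import sleep
from random import randint

from requests import get
from bs4 import BeautifulSoup

def fetch_page(year, page, headers):
    """Request one results page and return the parsed soup."""
    url = ('http://www.imdb.com/search/title?release_date=' + year
           + '&sort=num_votes,desc&page=' + page)
    response = get(url, headers=headers)
    response.raise_for_status()  # fail loudly on non-200 responses
    return BeautifulSoup(response.text, 'html.parser')

def parse_movies(soup):
    """Extract (name, year, rating) tuples from one page's soup."""
    movies = []
    for container in soup.find_all('div', class_='lister-item mode-advanced'):
        if container.find('div', class_='ratings-metascore') is not None:
            movies.append((container.h3.a.text,
                           container.h3.find('span', class_='lister-item-year').text,
                           float(container.strong.text)))
    return movies

headers = {"Accept-Language": "en-US, en;q=0.5"}
rows = []
for year in ('2016', '2017', '2018', '2019'):
    for page in ('1', '2', '3', '4'):
        rows.extend(parse_movies(fetch_page(year, page, headers)))
        sleep(randint(8, 15))  # keep the question's pause between requests

With this shape, each request is parsed as soon as it is made, and it becomes impossible to accidentally reuse a soup left over from an earlier page.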