Scraping h3 from a div using Python

Asked: 2019-05-11 10:52:52

Tags: python html web-scraping beautifulsoup scrape

I want to scrape the H3 titles from a DIV using Python 3.6, from the page

https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1

Note that the page number changes, incrementing by 1.
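For illustration, a minimal sketch of how the paged URLs could be generated (the three-page range is arbitrary, purely for the example):

base = 'https://player.bfi.org.uk/search/rentals?q=&sort=title&page={}'
for page_number in range(1, 4):  # pages 1 to 3, illustrative only
    print(base.format(page_number))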

I'm struggling to return or even identify the titles.

from requests import get
from bs4 import BeautifulSoup

url = 'https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1'
response = get(url)

html_soup = BeautifulSoup(response.text, 'lxml')
# Each film card on the page uses the classes 'card card--rentals'.
movie_containers = html_soup.find_all('div', class_='card card--rentals')
print(type(movie_containers))
print(len(movie_containers))

I've also tried looping through them:

for div in html_soup.select('div.card__content'):
    print(div.select_one('h3.card__title').text.strip())

Any help would be great.

Thanks

I expect the result to be the title of each film on each page, including a link to the film, e.g. https://player.bfi.org.uk/rentals/film/watch-akenfield-1975-online

2 answers:

Answer 0: (score: 1)

The page loads its content via XHR from another URL, so your request is missing it. You can mimic the XHR POST request the page uses and alter the JSON body it sends. If you change size, you get more results.

import requests

# JSON body copied from the XHR POST the page itself sends; "size" controls
# how many results come back in one response.
data = {"size":1480,"from":0,"sort":"sort_title","aggregations":{"genre":{"terms":{"field":"genre.raw","size":10}},"captions":{"terms":{"field":"captions"}},"decade":{"terms":{"field":"decade.raw","order":{"_term":"asc"},"size":20}},"bbfc":{"terms":{"field":"bbfc_rating","size":10}},"english":{"terms":{"field":"english"}},"audio_desc":{"terms":{"field":"audio_desc"}},"colour":{"terms":{"field":"colour"}},"mono":{"terms":{"field":"mono"}},"fiction":{"terms":{"field":"fiction"}}},"min_score":0.5,"query":{"bool":{"must":{"match_all":{}},"must_not":[],"should":[],"filter":{"term":{"pillar.raw":"rentals"}}}}}
# Query the search endpoint the site uses behind the scenes.
r = requests.post('https://search-es.player.bfi.org.uk/prod-films/_search', json=data).json()
for film in r['hits']['hits']:
    print(film['_source']['title'], 'https://player.bfi.org.uk' + film['_source']['url'])
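As a quick check before deciding on a size, the same response reports the total number of matches (assuming r is the parsed response from the snippet above):

# Total count of rentals reported by the search index.
print(r['hits']['total'])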

The actual result count for rentals is in the JSON at r['hits']['total'], so you can issue an initial request with a size much higher than you expect, check whether another request is needed, and then gather any extras by altering the from and size parameters.

import requests
import pandas as pd

# Deliberately high first batch size; topped up below if the index holds more.
initial_count = 10000
results = []

def add_results(r):
    # Collect (title, link) pairs from one response.
    for film in r['hits']['hits']:
        results.append([film['_source']['title'], 'https://player.bfi.org.uk' + film['_source']['url']])

with requests.Session() as s:
    data = {"size": initial_count,"from":0,"sort":"sort_title","aggregations":{"genre":{"terms":{"field":"genre.raw","size":10}},"captions":{"terms":{"field":"captions"}},"decade":{"terms":{"field":"decade.raw","order":{"_term":"asc"},"size":20}},"bbfc":{"terms":{"field":"bbfc_rating","size":10}},"english":{"terms":{"field":"english"}},"audio_desc":{"terms":{"field":"audio_desc"}},"colour":{"terms":{"field":"colour"}},"mono":{"terms":{"field":"mono"}},"fiction":{"terms":{"field":"fiction"}}},"min_score":0.5,"query":{"bool":{"must":{"match_all":{}},"must_not":[],"should":[],"filter":{"term":{"pillar.raw":"rentals"}}}}}
    r = s.post('https://search-es.player.bfi.org.uk/prod-films/_search', json=data).json()
    # Total number of rentals the search index reports.
    total_results = int(r['hits']['total'])
    add_results(r)

    # If the first batch did not cover everything, fetch the remainder.
    if total_results > initial_count:
        data['size'] = total_results - initial_count
        data['from'] = initial_count
        r = s.post('https://search-es.player.bfi.org.uk/prod-films/_search', json=data).json()
        add_results(r)

df = pd.DataFrame(results, columns=['Title', 'Link'])
print(df.head())
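If you want to keep the collected results, a small optional addition (the filename is just an example):

# Persist the titles and links; standard pandas CSV export.
df.to_csv('bfi_rentals.csv', index=False)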

Answer 1: (score: 0)

The problem you're running into actually has nothing to do with finding the div; I think you're doing that part correctly. However, when you try to access the site with
from requests import get
url = 'https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1'
response = get(url)

the response doesn't actually include everything you see in your browser. You can verify this by checking that 'card' in response.text is False. This is most likely because the cards are all loaded by javascript after the site loads, so just fetching the base content with the requests library isn't enough to get all the information you want to scrape.
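A minimal runnable version of that check (the substring 'card__title' is taken from the class names targeted in the question):

from requests import get

url = 'https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1'
response = get(url)

# The film cards are injected by javascript after page load, so this
# substring should be missing from the raw HTML that requests receives.
print('card__title' in response.text)  # expected: False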

I'd suggest taking a look at how the website loads all the cards; the Network tab in your browser's dev tools may help.