(Web scraping) I've found the right tags, so how do I extract the text?

Date: 2019-10-02 06:39:23

Tags: python python-3.x web-scraping beautifulsoup scrape

I'm building my first web-scraping application, which collects the titles of the games currently shown under the "New & Trending" tab on https://store.steampowered.com/. Once I've worked out the approach, I'd like to repeat the process for the prices and export both to separate columns of a spreadsheet.

I've successfully located the tags that contain the text I want to extract (the titles), but I'm not sure how to pull the titles out once I've found their containers.

from urllib.request import urlopen
from bs4 import BeautifulSoup

my_url = 'https://store.steampowered.com/'
uClient = urlopen(my_url)
page_html = uClient.read()
uClient.close()

page_soup = BeautifulSoup(page_html, "html.parser")
containers = page_soup.findAll("div",{"class":"tab_item_name"}, limit=10)

for titles in containers:
    print(titles)

What I'd like is to use a for loop to print the names of the 10 games on the Steam front page as a vertical list. What actually happens is that I print the tags containing the titles:

<div class="tab_item_name">Destiny 2: Shadowkeep</div>
<div class="tab_item_name">Destiny 2</div>
<div class="tab_item_name">Destiny 2: Forsaken</div>
<div class="tab_item_name">Destiny 2: Shadowkeep Digital Deluxe Edition</div>
<div class="tab_item_name">NGU IDLE</div>
<div class="tab_item_name">Kaede the Eliminator / Eliminator 小枫</div>
<div class="tab_item_name">Spaceland</div>
<div class="tab_item_name">Cube World</div>
<div class="tab_item_name">Aokana - Four Rhythms Across the Blue</div>
<div class="tab_item_name">CODE VEIN</div>

3 Answers:

Answer 0 (score: 0)

Read the documentation:

If you only need the text part of a document or tag, you can use the get_text() method. It returns all the text in a document or beneath a tag, as a single Unicode string.

So just do this:

# Should be `title` IMO, because you are currently handling a single title
for titles in containers:
    print(titles.get_text())
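If the names come back with stray whitespace, get_text() also accepts a strip argument that trims each piece of text before joining, for example:

for title in containers:
    # strip=True removes leading/trailing whitespace from each text fragment
    print(title.get_text(strip=True))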

Answer 1 (score: 0)

Use titles.text or even titles.get_text(), whichever you prefer, to get the title text, as shown below:

from urllib.request import urlopen
from bs4 import BeautifulSoup

my_url = 'https://store.steampowered.com/'
uClient = urlopen(my_url)
page_html = uClient.read()
uClient.close()

page_soup = BeautifulSoup(page_html, "html.parser")
containers = page_soup.findAll("div",{"class":"tab_item_name"}, limit=11)

for titles in containers:
    print(titles.text)
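Since the question also mentions grabbing the prices and exporting both to separate spreadsheet columns, here is a minimal sketch building on the same approach. The "discount_final_price" class name is an assumption about Steam's current markup and should be verified against the live page source:

import csv
from urllib.request import urlopen
from bs4 import BeautifulSoup

my_url = 'https://store.steampowered.com/'
uClient = urlopen(my_url)
page_soup = BeautifulSoup(uClient.read(), "html.parser")
uClient.close()

titles = page_soup.findAll("div", {"class": "tab_item_name"}, limit=10)
# Assumed class name for the price element; check it in the page source
prices = page_soup.findAll("div", {"class": "discount_final_price"}, limit=10)

with open("steam_trending.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Title", "Price"])
    # Assumes titles and prices appear in the same order on the page
    for title, price in zip(titles, prices):
        writer.writerow([title.get_text(strip=True), price.get_text(strip=True)])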

Answer 2 (score: 0)

Another very convenient approach is to use lxml:

import requests
import lxml.html

url = 'https://store.steampowered.com/'
# Make the request
response = requests.get(url=url, timeout=5)
# Parse tree
tree = lxml.html.fromstring(response.text)
# Select section corresponding to new games
sub_tree = tree.get_element_by_id('tab_newreleases_content')
# Extract data
games_list = [a.text_content() for a in sub_tree.find_class('tab_item_name')]

# Check
for game in games_list[:11]:
    print(game)
# Destiny 2: Shadowkeep
# Destiny 2
# Destiny 2: Forsaken
# Destiny 2: Shadowkeep Digital Deluxe Edition
# NGU IDLE
# Fernbus Simulator - MAN Lion's Intercity
# Euro Truck Simulator 2 - Pink Ribbon Charity Pack
# Spaceland
# Cube World
# CODE VEIN
# CODE VEIN
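The same sub_tree can be reused to pull the prices as well. The "discount_final_price" class below is an assumption about Steam's markup, not something verified here:

# Assumed price class; adjust after inspecting the live page
prices_list = [p.text_content() for p in sub_tree.find_class('discount_final_price')]
# Assumes names and prices appear in matching order within the section
for game, price in zip(games_list, prices_list):
    print(game, price)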