How do I get the text element of a hyperlink from a web page using Python?

Asked: 2018-12-06 23:48:09

Tags: python html web-scraping python-requests lxml

I am scraping web data and only need to return the text element associated with a hyperlink. The hyperlink and the text are unknown; only the class is known. Here is a sample of the HTML:

<div class="a-column SsCol" role = "gridcell">
    <h3 class="a-spacing-none SsName">
        <span class="a-size-medium a-text-bold">
            <a href="/gp/aag/main/ref=sm_name_2?ie=UTF8&ids=15112acd">Direct Name</a>
        </span>
    </h3>
</div>

Alternatively, the desired text may be associated with an image rather than a hyperlink:

<div class="a-column SsCol" role = "gridcell">
    <h3 class="a-spacing-none SsName">
            <img alt="Direct Name" src="https://images-hosted.com//01x-j.gi">
    </h3>
</div>

I have tried the following:

from lxml import html
import requests
response = requests.get('https://www.exampleurl.com/')
doc = html.fromstring(response.content)
text1 = doc.xpath("//*[contains(@class, 'SsName')]/text()")

I am using lxml rather than BeautifulSoup, but I am open to switching if that is recommended. The ideal result would be:

print(text1)
['Direct Name']

2 Answers:

Answer 0 (score: 1)

//*[contains(@alt, '')]/@alt finds all tags that have an alt attribute. This XPath is actually extended from XPath Query: get attribute href from a tag. You can then narrow the query to a specific tag, as shown in my text2 below:

from lxml import html

text = """
<div class="a-column SsCol" role = "gridcell">
    <h3 class="a-spacing-none SsName">
        <span class="a-size-medium a-text-bold">
            <a href="/gp/aag/main/ref=sm_name_2?ie=UTF8&ids=15112acd">Direct Name</a>
        </span>
    </h3>
</div>
<div class="a-column SsCol2" role = "gridcell">
    <h3 class="a-spacing-none SsName">
            <img alt="Direct Name" src="https://images-hosted.com//01x-j.gi">
    </h3>
</div>

"""

doc = html.fromstring(text)
# Any element carrying an alt attribute; contains(@alt, '') is true whenever @alt exists
text1 = doc.xpath("//*[contains(@alt, '')]/@alt")
print(text1)
# Restrict the same lookup to the div whose class contains 'a-column SsCol2'
text2 = doc.xpath("//div[contains(@class, 'a-column SsCol2')]//*[contains(@alt, '')]/@alt")
print(text2)
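
To cover both cases from the question (anchor text or image alt) in a single query, a union XPath can be used. This is only a sketch against the sample markup above, assuming the SsName class always appears on the enclosing h3:

names = doc.xpath(
    "//h3[contains(@class, 'SsName')]//a/text()"
    " | //h3[contains(@class, 'SsName')]//img/@alt"
)
print(names)
# ['Direct Name', 'Direct Name']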

Answer 1 (score: 0)

I would definitely give Beautiful Soup a try:

from bs4 import BeautifulSoup

# html_doc is the raw HTML string (here, the "three sisters" sample document from the Beautiful Soup docs)
soup = BeautifulSoup(html_doc, 'html.parser')

Some common ways to navigate the structure:

soup.title
# <title>The Dormouse's story</title>

soup.title.name
# u'title'

soup.title.string
# u'The Dormouse's story'

soup.title.parent.name
# u'head'

soup.p
# <p class="title"><b>The Dormouse's story</b></p>

soup.p['class']
# u'title'

soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.find(id="link3")
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

One common task is extracting all of the URLs found in a page's <a> tags:

for link in soup.find_all('a'):
    print(link.get('href'))
# http://example.com/elsie
# http://example.com/lacie

Another common task is extracting all of the text from a page:

print(soup.get_text())
# The Dormouse's story
#
# The Dormouse's story
#
# Once upon a time there were three little sisters; and their names were
# Elsie,
# Lacie and...

If you need anything beyond that, have a look at the documentation: Beautiful Soup
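
Applied to the markup from the question, a minimal Beautiful Soup sketch (assuming the class names shown in the sample HTML) might look like this:

from bs4 import BeautifulSoup

sample = """
<div class="a-column SsCol" role="gridcell">
    <h3 class="a-spacing-none SsName">
        <span class="a-size-medium a-text-bold">
            <a href="/gp/aag/main/ref=sm_name_2?ie=UTF8&ids=15112acd">Direct Name</a>
        </span>
    </h3>
</div>
"""

soup = BeautifulSoup(sample, 'html.parser')
names = []
for h3 in soup.find_all('h3', class_='SsName'):
    a = h3.find('a')
    if a is not None:
        # Hyperlink case: take the anchor's text
        names.append(a.get_text(strip=True))
    else:
        # Image case: fall back to the img alt attribute
        img = h3.find('img')
        if img is not None and img.get('alt'):
            names.append(img['alt'])

print(names)
# ['Direct Name']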