Using Beautiful Soup

Time: 2017-01-28 09:51:24

Tags: python beautifulsoup

I am trying to extract a very deeply nested href. The structure looks like this:

<div id="main">
 <ol>
   <li class>
     <div class>
       <div class>
         <a class>
         <h1 class="title entry-title">
           <a href="http://wwww.link_i_want_to_extract.com">
           <span class>
         </h1>
        </div>
       </div>
     </li>

Then there are a bunch of other <li class> elements with hrefs inside them. So basically the chain of parents down to the href I want is

li - div - div - h1 - a href

I have tried the following:

soup.select('li div div h1')

soup.find_all("h1", { "class" : "title entry-title" }) 

for item in soup.find_all("h1", attrs={"class" : "title entry-title"}):
        for link in item.find_all('a',href=TRUE):

None of these seem to work; I get [] or an empty .txt file.

Also, and more troubling, after defining soup and then doing print(soup), I cannot see the nested tags; I only see the top one, <div id="main">, and running print soup.li does not retrieve the li elements. It seems Beautiful Soup is not recognizing the li and the other nested tags.

4 Answers:

Answer 0 (score: 2)

This works for me:

from bs4 import BeautifulSoup

html = '''
<div id="main">
   <ol>
      <li class>
         <div class>
            <div class>
               <a class>
               <h1 class="title entry-title">
                  <a href="http://www.link_i_want_to_extract.com">
                  <span class>
               </h1>
            </div>
         </div>
      </li>
      <li class>
         <div class>
            <div class>
               <a class>
               <h1 class="title entry-title">
                  <a href="https://other_link_i_want_to_extract.net">
                  <span class>
               </h1>
            </div>
         </div>
      </li>
   </ol>
</div>
'''

soup = BeautifulSoup(html, "lxml")
for h1 in soup.find_all('h1', class_="title entry-title"):
    print(h1.find("a")['href'])
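
The same links can also be pulled in one pass with a CSS selector. A minimal equivalent sketch, reusing the soup object built above (the selector path is an inference from the markup shown, where the a tag sits directly inside the h1):

# CSS-selector equivalent of the nested find_all()/find() calls above
for a in soup.select('h1.title.entry-title > a[href]'):
    print(a['href'])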

Answer 1 (score: 1)

You have a typo: href=TRUE should be href=True.

s = """
<div id="main">
   <ol>
      <li class>
         <div class>
            <div class>
               <a class>
               <h1 class="title entry-title">
                  <a href="http://www.link_i_want_to_extract.com">
                  <span class>
               </h1>
            </div>
         </div>
      </li>
      <li class>
         <div class>
            <div class>
               <a class>
               <h1 class="title entry-title">
                  <a href="https://other_link_i_want_to_extract.net">
                  <span class>
               </h1>
            </div>
         </div>
      </li>
   </ol>
</div>
"""

from bs4 import BeautifulSoup
soup = BeautifulSoup(s, 'html.parser')

for item in soup.find_all("h1", attrs={"class" : "title entry-title"}):
    for link in item.find_all('a',href=True):
        print('bs link:', link['href'])

Or you can use pyQuery, which gives you a js/jQuery-like query syntax:

from pyquery import PyQuery as pq
from lxml import etree

d = pq(s)
for link in d('h1.title.entry-title > a'):
    print('pq link:', pq(link).attr('href'))

Which returns:

bs link: http://www.link_i_want_to_extract.com
bs link: https://other_link_i_want_to_extract.net
pq link: http://www.link_i_want_to_extract.com
pq link: https://other_link_i_want_to_extract.net

Answer 2 (score: 0)

Use dot notation (.) to get the first matching descendant:

soup.find('div', id="main").h1.a['href']

Or use the h1 as the anchor:

soup.find("h1", { "class" : "title entry-title" }).a['href']

Answer 3 (score: 0)

A simple approach:

soup.select('a[href]')

Or:

soup.findAll('a', href=True)
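
Both calls return every anchor in the document that carries an href (findAll is the older alias of find_all). A minimal usage sketch, again assuming soup was built from the sample markup above:

# collect every href on the page, then filter as needed
links = [a['href'] for a in soup.find_all('a', href=True)]
print(links)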