Automatically generating a nested table of contents from heading tags with Python

Date: 2019-03-12 14:42:48

Tags: python regex html-parsing nested-lists

I am trying to create a nested table of contents based on the heading tags of an HTML document.

My HTML file:

<html>
<head>
  <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
  <h1>
            My report Name
  </h1>
  <h1 id="2">First Chapter                          </h1>
  <h2 id="3"> First Sub-chapter of the first chapter</h2>
  <ul>
    <h1 id="text1">Useless h1</h1>
    <p>
      some text
    </p>
  </ul>
  <h2 id="4">Second Sub-chapter of the first chapter </h2>
  <ul>
    <h1 id="text2">Useless h1</h1>
    <p>
      some text
    </p>
  </ul>
  <h1 id="5">Second Chapter                          </h1>
  <h2 id="6">First Sub-chapter of the Second chapter </h2>
  <ul>
    <h1 id="text6">Useless h1</h1>
    <p>
      some text
    </p>
  </ul>
  <h2 id="7">Second Sub-chapter of the Second chapter </h2>
  <ul>
    <h1 id="text6">Useless h1</h1>
    <p>
      some text
    </p>
  </ul>
</body>
</html>

My Python code:

from lxml import html
from bs4 import BeautifulSoup as soup
import re
import codecs
#Access to the local URL(Html file)
f = codecs.open("C:\\x\\test.html", 'r')
page = f.read()
f.close()
#html parsing
page_soup = soup(page,"html.parser")
tree = html.fromstring(page)
# Extract the report name
ref = page_soup.find("h1",{"id": False}).text.strip()
print("the name of the report is : " + ref + " \n")

# Chapters are the <h1> tags whose id is purely numeric
chapters = page_soup.findAll('h1', attrs={'id': re.compile("^[0-9]+$")})
print("We have " + str(len(chapters)) + " chapter(s)")
for index, chapter in enumerate(chapters):
    print(str(index+1) +"-" + str(chapter.text.strip()) + "\n")

# Sub-chapters are the <h2> tags whose id is purely numeric
sub_chapters = page_soup.findAll('h2', attrs={'id': re.compile("^[0-9]+$")})
print("We have " + str(len(sub_chapters)) + " sub_chapter(s)")
for index, sub_chapter in enumerate(sub_chapters):
    print(str(index+1) +"-" +str(sub_chapter.text.strip()) + "\n")

With this code I can get all the chapters and all the sub-chapters, but that is not my goal.

My goal is to get the following as the table of contents:

1-First Chapter
    1-First sub-chapter of the first chapter
    2-Second sub-chapter of the first chapter
2-Second Chapter    
    1-First sub-chapter of the Second chapter
    2-Second sub-chapter of the Second chapter

Any suggestions or ideas on how to achieve the desired table-of-contents format?

1 Answer:

Answer 0 (score: 1)

If you are willing to change your HTML layout to something like the following:

<html>

<head>
  <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>

<body>
  <article>
    <h1>
      My report Name
    </h1>
    <section>
      <h2 id="chapter-one">First Chapter</h2>
      <section>
        <h3 id="one-one"> First Sub-chapter of the first chapter</h3>
        <ul>
          <h4 id="text1">Useless h4</h4>
          <p>
            some text
          </p>
        </ul>
      </section>
      <section>
        <h3 id="one-two">Second Sub-chapter of the first chapter</h3>
        <ul>
          <h4 id="text2">Useless h4</h4>
          <p>
            some text
          </p>
        </ul>
      </section>
    </section>
    <section>
      <h2 id="chapter-two">Second Chapter </h2>
      <section>
        <h3 id="two-one">First Sub-chapter of the Second chapter</h3>
        <ul>
          <h4 id="text6">Useless h4</h4>
          <p>
            some text
          </p>
        </ul>
      </section>
      <section>
        <h3 id="two-two">Second Sub-chapter of the Second chapter</h3>
        <ul>
          <h4 id="text6">Useless h4</h4>
          <p>
            some text
          </p>
        </ul>
      </section>
    </section>
  </article>
</body>

</html>

then your Python code becomes much simpler:

from bs4 import BeautifulSoup as soup
import codecs

# Read the local HTML file
with codecs.open("index.html", 'r') as f:
    page = f.read()

# Parse the HTML
page_soup = soup(page, "html.parser")

# Extract the report name (the first <h1> is the report title)
ref = page_soup.find("h1").text.strip()
print("the name of the report is : " + ref + " \n")

chapters = page_soup.findAll('h2')
for index, chapter in enumerate(chapters):
    print(str(index+1) +"-" + str(chapter.text.strip()))
    sub_chapters = chapter.find_parent().find_all("h3")
    for index2, sub_chapter in enumerate(sub_chapters):
       print("\t" + str(index2+1) +"-" +str(sub_chapter.text.strip()))

I updated the page-reading code a bit and tried to use more idiomatic Python in the updated script.

Also, note that in:

sub_chapters = chapter.find_parent().find_all("h3")

the find_all call is relative to the chapter's parent element, not to the whole document.
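If changing the HTML layout is not an option, the original markup can also be handled by grouping each numbered `<h1>` with the `<h2>` siblings that follow it, stopping at the next `<h1>`. Below is a minimal sketch using only BeautifulSoup; the inline `html_doc` is a trimmed stand-in for the real file:

```python
import re
from bs4 import BeautifulSoup

# Trimmed stand-in for the original HTML file
html_doc = """
<body>
  <h1>My report Name</h1>
  <h1 id="2">First Chapter</h1>
  <h2 id="3">First Sub-chapter of the first chapter</h2>
  <h2 id="4">Second Sub-chapter of the first chapter</h2>
  <h1 id="5">Second Chapter</h1>
  <h2 id="6">First Sub-chapter of the Second chapter</h2>
  <h2 id="7">Second Sub-chapter of the Second chapter</h2>
</body>
"""

page_soup = BeautifulSoup(html_doc, "html.parser")

toc = []
# Chapters are the <h1> tags with a purely numeric id
for chapter in page_soup.find_all("h1", attrs={"id": re.compile(r"^[0-9]+$")}):
    subs = []
    # Walk the following <h1>/<h2> siblings; stop at the next chapter
    for sibling in chapter.find_next_siblings(["h1", "h2"]):
        if sibling.name == "h1":
            break
        subs.append(sibling.get_text(strip=True))
    toc.append((chapter.get_text(strip=True), subs))

for i, (title, subs) in enumerate(toc, 1):
    print(str(i) + "-" + title)
    for j, sub in enumerate(subs, 1):
        print("\t" + str(j) + "-" + sub)
```

The sibling walk works here because the "Useless h1" headings sit inside `<ul>` elements, so they are children of those lists rather than siblings of the chapter headings (and their non-numeric ids are filtered out by the regex anyway).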