How to loop through BS4 data and correctly print div tags

Asked: 2019-01-31 02:35:30

Tags: python beautifulsoup

I am trying to use BS4 to copy all of the data from an HTML page that has the specific class "chapter_header_styling".

This works when I manually enter the URL, but it is tedious when there are multiple books and different chapters. So I created another script that generates all of the chapter URLs for the book and combines them into a text file, bchap.txt (book chapters).

Since then I have changed the file and added various breakpoints, so please ignore my missing comments and unused arrays/lists. I have narrowed it down to the ###Comment## section, where it stops working. It may be nested incorrectly, but I'm not sure... I have gotten this far but can't figure out why it won't write the mydivs data into the book.html file. If someone with more experience could point me in the right direction, it would be greatly appreciated.

#mkbook.py
# coding: utf-8
from bs4 import BeautifulSoup
import requests

LINK = "https://codes.iccsafe.org/content/FAC2017"
pop = ""
#z = ""
chapters = open("bchap.txt",'r')
a = []
for aline in chapters:
  chap = aline
  #print (chap)
  #pop = ""
  pop = LINK+chap
  #print (pop)
  r = requests.get(pop)
  data = r.text
  #print(data)

  soup = BeautifulSoup(data, 'html.parser')

  mydivs = soup.findAll("div", {"class": ["annotator", "chapter_header_styling"]})

  f = open("BOOK.html","a")
  f.write("test <br/>")

########################################
#MY PROBLEM IS BELOW NOT PRINTING DIV DATA INTO TXT FILE
########################################
  for div in mydivs:
      print (div)
      z = str(div)
      print(z)  #doesn't printout...why???
      f.write(z)
  print(len(mydivs))

  f.close()

chapters.close()



##############################################
## this is the old mkbook.py code before I looped it - inputting one URL at a time
#
# coding: utf-8
from bs4 import BeautifulSoup
import requests
r = requests.get("https://codes.iccsafe.org/content/FAC2017/preface")
data = r.text
soup = BeautifulSoup(data, 'html.parser')
a = []
mydivs = soup.findAll("div",{"class":["annotator", 
"chapter_header_styling"]})
f = open("BOOK.html","a")
for div in mydivs:
  z = str(div)
  f.write(z)
f.close()
print(len(mydivs))  # outputs 1 if the div data was copied

#######################################
#mkchap.py
# coding: utf-8
from bs4 import BeautifulSoup
import requests
r = requests.get("https://codes.iccsafe.org/content/FAC2017")
data = r.text
soup = BeautifulSoup(data, 'html.parser')
a = []
options = soup.findAll('option', {"value": True})  # keep only <option> tags that have a value attribute
with open('bchap.txt', 'w') as filehandle:
  for opt in options:
    filehandle.write(opt['value'])
    filehandle.write("\n")
    print(opt['value'])

1 Answer:

Answer 0 (score: 0)

The problem seems to be that you are constructing your URLs with the wrong base URL.

LINK = "https://codes.iccsafe.org/content/FAC2017"

You can see it clearly if you look at the first request.

print(pop)
print(r.status_code)

Output:

https://codes.iccsafe.org/content/FAC2017/content/FAC2017

404

After running your code to populate bchap.txt, its contents are

/content/FAC2017
/content/FAC2017/legend
/content/FAC2017/copyright
/content/FAC2017/preface
/content/FAC2017/chapter-1-application-and-administration
/content/FAC2017/chapter-2-scoping-requirements
/content/FAC2017/chapter-3-building-blocks
/content/FAC2017/chapter-4-accessible-routes
/content/FAC2017/chapter-5-general-site-and-building-elements
/content/FAC2017/chapter-6-plumbing-elements-and-facilities
/content/FAC2017/chapter-7-communication-elements-and-features
/content/FAC2017/chapter-8-special-rooms-spaces-and-elements
/content/FAC2017/chapter-9-built-in-elements
/content/FAC2017/chapter-10-recreation-facilities
/content/FAC2017/list-of-figures
/content/FAC2017/fair-housing-accessibility-guidelines-design-guidelines-for-accessible-adaptable-dwellings
/content/FAC2017/advisory

Let's change the base URL first and try again.

from bs4 import BeautifulSoup
import requests

LINK = "https://codes.iccsafe.org"
pop = ""
chapters = open("bchap.txt",'r')
a = []
for aline in chapters:
  chap = aline
  pop = LINK+chap
  r = requests.get(pop)
  print(pop)
  print(r.status_code)
chapters.close()

Output:

https://codes.iccsafe.org/content/FAC2017

404
...

Why? Because of the trailing \n. If we do

print(repr(pop))

it will output

'https://codes.iccsafe.org/content/FAC2017\n'
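A quick standalone demonstration of the fix: `str.strip()` removes the trailing newline that `repr` exposes.

```python
line = "/content/FAC2017/preface\n"   # a line as read from bchap.txt

print(repr(line))          # '/content/FAC2017/preface\n'
print(repr(line.strip()))  # '/content/FAC2017/preface'
```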

You also have to strip that \n. The final code that works is

from bs4 import BeautifulSoup
import requests
LINK = "https://codes.iccsafe.org"
pop = ""
chapters = open("bchap.txt",'r')
a = []
for aline in chapters:
  chap = aline
  pop = LINK+chap
  r = requests.get(pop.strip())
  data = r.text
  soup = BeautifulSoup(data, 'html.parser')
  mydivs = soup.findAll("div", class_="annotator chapter_header_styling")
  f = open("BOOK.html","a")
  for div in mydivs:
      z = str(div)
      f.write(z)
  f.close()
chapters.close()
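One caveat about the final code: passing `class_="annotator chapter_header_styling"` matches only that exact class attribute string, in that order. If the site ever reorders or adds classes, a CSS selector is more robust since it matches both classes regardless of order. A small sketch against a hypothetical snippet standing in for one chapter page:

```python
from bs4 import BeautifulSoup

# Hypothetical fragment standing in for one chapter page.
html = '<div class="annotator chapter_header_styling">Chapter text</div>'
soup = BeautifulSoup(html, "html.parser")

# select() matches both classes no matter how the attribute is ordered.
divs = soup.select("div.annotator.chapter_header_styling")
print(len(divs))  # 1
```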