I'm parsing content using Python and Beautiful Soup, then writing it out to a CSV file, and I've run into a problem getting a certain set of data. The data is run through an implementation of TidyHTML that I made, and then other unwanted data is stripped out.

The problem is that I need to retrieve all of the data between one set of <h3> tags.
Sample data:
<h3><a href="Vol-1-pages-001.pdf">Pages 1-18</a></h3>
<ul><li>September 13 1880. First regular meeting of the faculty;
September 14 1880. Discussion of curricular matters. Students are
debarred from taking algebra until they have completed both mental
and fractional arithmetic; October 4 1880.</li><li>All members present.</li></ul>
<ul><li>Moved the faculty henceforth hold regular weekkly meetings in the
President's room of the University building; 11 October 1880. All
members present; 18 October 1880. Regular meeting 2. Moved that the
President wait on the property holders on 12th street and request
them to abate the nuisance on their property; 25 October 1880.
Moved that the senior and junior classes for rhetoricals be...</li></ul>
<h3><a href="Vol-1-pages-019.pdf">Pages 19-33</a></h3>`
I need to retrieve all of the content between the first closing </h3> tag and the next opening <h3> tag. This shouldn't be hard, but my thick head isn't making the necessary connections. I can grab all of the <ul> tags, but that doesn't work, because there is not a one-to-one relationship between the <h3> tags and the <ul> tags.

The output I'm hoping to achieve is:

Pages 1-18 | Vol-1-pages-001.pdf | content between the </h3> and <h3> tags.

The first two parts are not a problem, but the content between a set of tags is difficult for me.
My current code is as follows:
import glob, re, os, csv
from BeautifulSoup import BeautifulSoup
from tidylib import tidy_document
from collections import deque

html_path = 'Z:\\Applications\\MAMP\\htdocs\\uoassembly\\AssemblyRecordsVol1'
csv_path = 'Z:\\Applications\\MAMP\\htdocs\\uoassembly\\AssemblyRecordsVol1\\archiveVol1.csv'

html_cleanup = {'\r\r\n':'', '\n\n':'', '\n':'', '\r':'', '\r\r': '', '<img src="UOSymbol1.jpg" alt="" />':''}

for infile in glob.glob( os.path.join(html_path, '*.html') ):
    print "current file is: " + infile

    html = open(infile).read()
    for i, j in html_cleanup.iteritems():
        html = html.replace(i, j)

    # parse cleaned up html with Beautiful Soup
    soup = BeautifulSoup(html)
    # print soup

    html_to_csv = csv.writer(open(csv_path, 'a'), delimiter='|',
                             quoting=csv.QUOTE_NONE, escapechar=' ')

    # retrieve the string that has the page range and file name
    volume = deque()
    fileName = deque()
    summary = deque()
    i = 0
    for title in soup.findAll('a'):
        if title['href'].startswith('V'):
            # print title.string
            volume.append(title.string)
            i += 1
            # print soup('a')[i]['href']
            fileName.append(soup('a')[i]['href'])
    # print html_to_csv
    # html_to_csv.writerow([volume, fileName])

    # retrieve the summary of each archive and store
    # for body in soup.findAll('ul') or soup.findAll('ol'):
    #     summary.append(body)
    for body in soup.findAll('h3'):
        body.findNextSibling(text=True)
        summary.append(body)

    # print out each field into the csv file
    for c in range(i):
        pages = volume.popleft()
        path = fileName.popleft()
        notes = summary
        if not summary:
            notes = "help"
        if summary:
            notes = summary.popleft()
        html_to_csv.writerow([pages, path, notes])
Answer 0 (score: 0)
If you are trying to extract data between <ul><li></li></ul> tags, lxml provides the ability to use a CSSSelector:
import lxml.html
import urllib

data = urllib.urlopen('file:///C:/Users/ranveer/st.html').read()  # contains your html snippet
doc = lxml.html.fromstring(data)
elements = doc.cssselect('ul li')  # CSS path [found using the Firebug extension]
for element in elements:
    print element.text_content()
After executing the above code you will get all of the text between the ul and li tags. It is much cleaner than Beautiful Soup.
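Since the original question is about grouping content under each <h3> heading, a rough sketch of that pairing in lxml could iterate each heading's following siblings until the next heading. This is only a sketch, assuming (as in the sample data) that the <h3> and <ul> elements are siblings; the file path is the same example path used above:

import lxml.html
import urllib

doc = lxml.html.fromstring(urllib.urlopen('file:///C:/Users/ranveer/st.html').read())
for h3 in doc.cssselect('h3'):
    notes = []
    # collect the siblings that follow this <h3>, stopping at the next <h3>
    for sibling in h3.itersiblings():
        if sibling.tag == 'h3':
            break
        notes.append(sibling.text_content())
    # heading text | linked pdf | text of everything in between
    print h3.text_content(), '|', h3.cssselect('a')[0].get('href'), '|', ' '.join(notes)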
If by any chance you plan on using lxml, you can evaluate XPath expressions in the following way -
import urllib
from lxml import etree

content = etree.HTML(urllib.urlopen("file:///C:/Users/ranveer/st.html").read())
content_text = content.xpath("html/body/h3[1]/a/@href | //ul[1]/li/text() | //ul[2]/li/text() | //h3[2]/a/@href")
print content_text
You can change the XPath according to your needs.
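For example, instead of hard-coding ul[1] and ul[2], the pairing can be expressed with a sibling-counting predicate: the <ul> elements that sit between heading n and heading n+1 are exactly those with n preceding <h3> siblings. A sketch under the same assumption that the headings and lists are siblings:

import urllib
from lxml import etree

content = etree.HTML(urllib.urlopen("file:///C:/Users/ranveer/st.html").read())
for n, h3 in enumerate(content.xpath('//h3'), start=1):
    # the <ul> elements between h3 number n and the next h3 have
    # exactly n preceding <h3> siblings
    texts = content.xpath('//ul[count(preceding-sibling::h3)=%d]//text()' % n)
    print h3.xpath('a/@href')[0], '|', ' '.join(t.strip() for t in texts)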
Answer 1 (score: 0)
To extract content between </h3> and <h3> tags:
from itertools import takewhile

h3s = soup('h3')  # find all <h3> elements
for h3, h3next in zip(h3s, h3s[1:]):
    # get elements in between
    between_it = takewhile(lambda el: el is not h3next, h3.nextSiblingGenerator())
    # extract text
    print(''.join(getattr(el, 'text', el) for el in between_it))
The code assumes that all <h3> elements are siblings. If that is not the case, then you could use h3.nextGenerator() instead of h3.nextSiblingGenerator().
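Wiring this into the CSV output the question is after, one possible end-to-end sketch (assuming BeautifulSoup 3 as in the question's code; html is the cleaned-up document read in there, and archive.csv is a hypothetical output path) would be:

import csv
from itertools import takewhile
from BeautifulSoup import BeautifulSoup

soup = BeautifulSoup(html)  # html: the cleaned-up document from the question
html_to_csv = csv.writer(open('archive.csv', 'a'), delimiter='|',
                         quoting=csv.QUOTE_NONE, escapechar=' ')
h3s = soup('h3')
# pair each <h3> with the next one; None marks the end of the document
for h3, h3next in zip(h3s, h3s[1:] + [None]):
    between_it = takewhile(lambda el: el is not h3next, h3.nextSiblingGenerator())
    notes = ''.join(getattr(el, 'text', el) for el in between_it)
    # e.g. Pages 1-18 | Vol-1-pages-001.pdf | September 13 1880. ...
    html_to_csv.writerow([h3.a.string, h3.a['href'], notes.strip()])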