I am learning Python and Beautiful Soup, and I am working through the Google exercise on regex using a set of html files that contain the popular baby names of different years (e.g. baby1990.html, etc.). If you are interested, the dataset can be found here: https://developers.google.com/edu/python/exercises/baby-names
Each html file contains a table holding the baby-name data (a ranked list of male and female names for that year).
I have written a function that extracts the baby names from the html files and stores them in dataframes: one dataframe per file, kept in a dictionary, plus all of them aggregated into a single dataframe.
There are two tables in each html file. The table that contains the baby data has the following html code:
<table width="100%" border="0" cellspacing="0" cellpadding="4" summary="formatting">

In this line the distinguishing attribute is summary="formatting".

The function I have written, edited according to the feedback I received, is the following:
import re
import pandas as pd
from bs4 import BeautifulSoup as bs   # imports required by the function

def babynames(path):
    # This function takes the path of the directory where the html files are stored and returns a list containing
    # a dataframe which encompasses all the tabular baby-names data in the files, as well as a dictionary holding
    # a separate dataframe for each html file.
    # 0: Initialize objects
    dicnames = {}                 # will hold the dataframes containing the tabular data of each year
    dfnames = pd.DataFrame([])    # will hold the aggregate data
    # 1: Create a list containing the full paths of the baby files in the directory indicated by the
    #    path argument of the babynames function
    allfiles = files(path)
    # 2: Begin looping through the files
    for file in allfiles:
        with open(file, "r") as f:
            soup = bs(f.read(), 'lxml')   # Convert the file to a soup
        # 3. Initialize empty lists to hold the contents of the cells
        Rank = []
        Baby_1 = []
        Baby_2 = []
        df = pd.DataFrame([])
        # 4. Extract the table containing the baby data and loop through the rows of this table
        for row in soup.select("table[summary=formatting] tr"):
            # 5. Extract the cells
            cells = row.findAll("td")
            # 6. Convert to text and append to the lists
            try:
                Rank.append(cells[0].find(text=True))
                Baby_1.append(cells[1].find(text=True))
                Baby_2.append(cells[2].find(text=True))
            except:
                print "file: ", file
                try:
                    print "cells[0]: ", cells[0]
                except:
                    print "cells[0] : NaN"
                try:
                    print "cells[1]: ", cells[1]
                except:
                    print "cells[1] : NaN"
                try:
                    print "cells[2]: ", cells[2]
                except:
                    print "cells[2] : NaN"
        # 7. Append the lists to the empty dataframe df
        df["Rank"] = Rank
        df["Baby_1"] = Baby_1
        df["Baby_2"] = Baby_2
        # 8. Append the year to the dataframe as a separate column
        df["Year"] = extractyear(file)   # Call the function extractyear() defined in the environment with input
                                         # the full pathname stored in the variable file and examined in the
                                         # current iteration
        # 9. Rearrange the order of columns
        # df.columns.tolist() = ['Year', 'Rank', 'Baby_1', 'Baby_2']
        # 10. Store the dataframe in a dictionary as the value whose key is the name of the file
        pattern = re.compile(r'.*(baby\d\d\d\d).*')
        filename = re.search(pattern, file).group(1)
        dicnames[filename] = df
    # 11. Combine the dataframes stored in the dictionary dicnames into an aggregate dataframe dfnames
    for key, value in dicnames.iteritems():
        dfnames = pd.concat([dfnames, value])   # pd.concat takes a list of dataframes
    # 12. Store dfnames and dicnames in a list called result. Return result.
    result = [dfnames, dicnames]
    return result
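The function relies on two helpers, files() and extractyear(), which the asker defines elsewhere in their environment and does not show. Purely as hypothetical stand-ins (the asker's real implementations may differ), they could look something like:

import glob
import os
import re

def files(path):
    # hypothetical stand-in: return the full paths of the babyXXXX.html files in the directory
    return sorted(glob.glob(os.path.join(path, "baby*.html")))

def extractyear(filepath):
    # hypothetical stand-in: pull the four-digit year out of a path such as ".../baby1990.html"
    match = re.search(r"baby(\d{4})\.html", filepath)
    return int(match.group(1)) if match else None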
When I run the function with the given path (the path of the directory where I store the html files), I receive the following error message:

result = babynames(path)

Output:
---------------------------------------------------------------------------
file: C:/Users/ALEX/MyFiles/JUPYTER NOTEBOOKS/google-python-exercises/babynames/baby1990.html
cells[0]: cells[0] : NaN
cells[1]: cells[1] : NaN
cells[2]: cells[2] : NaN
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-72-5c9ebdc4dcdb> in <module>()
----> 1 result = babynames(path)
<ipython-input-71-a0263a6790da> in babynames(path)
54
55 # 7. Append the lists to the empty dataframe df
---> 56 df["Rank"] = Rank
57 df["Baby_1"] = Baby_1
58 df["Baby_2"] = Baby_2
C:\users\alex\Anaconda2\lib\site-packages\pandas\core\frame.pyc in __setitem__(self, key, value)
2355 else:
2356 # set column
-> 2357 self._set_item(key, value)
2358
2359 def _setitem_slice(self, key, value):
C:\users\alex\Anaconda2\lib\site-packages\pandas\core\frame.pyc in _set_item(self, key, value)
2421
2422 self._ensure_valid_index(value)
-> 2423 value = self._sanitize_column(key, value)
2424 NDFrame._set_item(self, key, value)
2425
C:\users\alex\Anaconda2\lib\site-packages\pandas\core\frame.pyc in _sanitize_column(self, key, value)
2576
2577 # turn me into an ndarray
-> 2578 value = _sanitize_index(value, self.index, copy=False)
2579 if not isinstance(value, (np.ndarray, Index)):
2580 if isinstance(value, list) and len(value) > 0:
C:\users\alex\Anaconda2\lib\site-packages\pandas\core\series.pyc in _sanitize_index(data, index, copy)
2768
2769 if len(data) != len(index):
-> 2770 raise ValueError('Length of values does not match length of ' 'index')
2771
2772 if isinstance(data, PeriodIndex):
ValueError: Length of values does not match length of index
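For context, pandas raises this ValueError whenever the values assigned to a column do not have the same length as the dataframe's existing index. A minimal, self-contained illustration (toy data, not the question's):

import pandas as pd

df = pd.DataFrame({"Rank": [1, 2, 3]})
df["Baby_1"] = ["John", "Mary"]   # 2 values against an index of length 3 -> ValueError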
cells[0], cells[1] and cells[2] should have values. As I mentioned, there is also the other table, which is identified by different html code.

I ran a version of the function in which I did not specify the table (I had not noticed that there are two tables in the html files), and in that version I did not get this type of error. I did, however, get an error message about line 6, saying that the indentation of the try statement is incorrect, which I do not understand, and an error message about line 9, where I try to rearrange the columns of the dataframe, which I also cannot understand.
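The "line 9" problem is visible in step 9 of the code above: df.columns.tolist() = [...] assigns to the result of a function call, which Python rejects. A minimal sketch of one common way to reorder pandas columns, using the column names from the question (whether this matches what the asker ultimately wants is an assumption):

import pandas as pd

df = pd.DataFrame({"Rank": [1], "Baby_1": ["John"], "Baby_2": ["Mary"], "Year": [1990]})
df = df[["Year", "Rank", "Baby_1", "Baby_2"]]   # select the columns in the desired order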
Your advice would be greatly appreciated.
Answer 0 (score: 2)
right_table is a ResultSet instance (basically a list of the Tag instances representing the matched elements); it has no findAll() or find_all() method.

Instead, if you have multiple elements, you can loop over the elements in right_table:

right_table = soup.find_all("table", summary="formatting")
for table in right_table:
    for row in table.findAll("tr"):
        # ...

Or, if there is only one, use find():

right_table = soup.find("table", summary="formatting")

Or, use a single CSS selector:
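The selector version itself falls outside this excerpt; a minimal sketch of what it might look like, assuming bs4's select_one()/select() and the same table[summary=formatting] selector the question already uses:

right_table = soup.select_one("table[summary=formatting]")   # first matching table, or None
if right_table is not None:
    for row in right_table.select("tr"):
        cells = row.find_all("td")
        # ... process the cells as in the question's loop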