I need to fill a table with entries that can be located in the HTML page via soup.findAll('table',{'id':'taxHistoryTable'}). I now need to create a similar pointer into this soup so that I can pull the values out of the following markup:
<table id="taxHistoryTable" class="view-history responsive-table yui3-toggle-content-minimized ceilingless"><thead>
<tr><th class="year">Year</th>
<th class="numeric property-taxes">Property taxes</th>
<th class="numeric">Change</th><th class="numeric tax-assessment">Tax assessment</th>
<th class="numeric">Change</th></tr></thead><tfoot>
<tr><td colspan="5"><span class="yui3-toggle-content-link-block"><a href="#" class="yui3-toggle-content-link">
<span class="maximize">More</span><span class="minimize">Fewer</span></a></span></td> </tr></tfoot><tbody>
<tr class="alt"><td>2011</td><td class="numeric">$489</td><td class="numeric"><span class="delta-value"><span class="inc">-81.8%</span></span></td>
<td class="numeric">$34,730</td>
<td class="numeric"><span class="delta-value"><span class="inc">-6.9%</span></span> </td></tr><tr>
<td>2010</td><td class="numeric">$2,683</td><td class="numeric"><span class="delta-value"><span class="dec">177%</span></span></td><td class="numeric">$37,300</td><td class="numeric"><span class="delta-value"><span class="dec">98.7%</span></span></td></tr>
<tr class="alt"><td>2009</td><td class="numeric">$969</td><td class="numeric"><span class="delta-value">--</span></td><td class="numeric">$18,770</td><td class="numeric"><span class="delta-value">--</span></td></tr>
<tr class="minimize"><td>2008</td><td class="numeric">$0</td><td class="numeric"><span class="delta-value">--</span></td><td class="numeric">$18,770</td><td class="numeric"><span class="delta-value">--</span></td></tr>
</tbody></table>
These entries live in the table with id taxHistoryTable. I wrote two loops to pinpoint the cells, then tried to assign their values to variable names and write them out to a CSV file:
# imports used by this snippet (the full script defines more, e.g. the csv writer and houselink)
import urllib2
from bs4 import BeautifulSoup

page = urllib2.urlopen(houselink).read()                 # opening link
soup = BeautifulSoup(page)                               # parsing link
address = soup.find('h1', {'class': 'prop-addr'})        # html element holding the house address
price = soup.find('h2', {'class': 'prop-value-price'})   # price info; find() returns only the first instance
price1 = price.find('span', {'class': 'value'})          # the price element was not unique at the granular level, so identify it via its parent
# price was not unique because a Zestimate price is also present on the page
bedroom = soup.findAll('span', {'class': 'prop-facts-value'})[0]
bathroom = soup.findAll('span', {'class': 'prop-facts-value'})[1]
# zestimate
zestimate = soup.findAll('td', {'class': 'zestimate'})[1]
# tax
loop1 = soup.findAll('table', {'id': 'taxHistoryTable'})
for form1 in loop1:
    loop2 = form1.findAll('tr', {'class': 'alt'})
    for form2 in loop2:
        #year1 = form2.findAll('td')[0]
        # findAll, not find: find() returns a single tag, so find(...)[0] would fail
        tax1 = form2.findAll('td', {'class': 'numeric'})[0]
        percent1 = form2.findAll('span', {'class': 'inc'})[0]
        asses1 = form2.findAll('td', {'class': 'numeric'})[1]
        percent2 = form2.findAll('span', {'class': 'inc'})[1]  # was misspelled 'precent2', which left percent2 unassigned
# the cleaning below runs after the loops, so it depends on the loop body
# having executed at least once to bind tax1/percent1/asses1/percent2
try:
    q_cleaned = unicode(u' '.join(zestimate.stripped_strings)).encode('utf8').strip()
except AttributeError:
    q_cleaned = ""
try:
    r_cleaned = unicode(u' '.join(tax1.stripped_strings)).encode('utf8').strip()
except AttributeError:
    r_cleaned = ""
try:
    s_cleaned = unicode(u' '.join(percent1.stripped_strings)).encode('utf8').strip()
except AttributeError:
    s_cleaned = ""
try:
    t_cleaned = unicode(u' '.join(asses1.stripped_strings)).encode('utf8').strip()
except AttributeError:
    t_cleaned = ""
try:
    u_cleaned = unicode(u' '.join(percent2.stripped_strings)).encode('utf8').strip()
except AttributeError:
    u_cleaned = ""
# a_cleaned .. p_cleaned and coordinates are built in earlier parts of the full script
spamwriter.writerow([a_cleaned, b_cleaned, d_cleaned, e_cleaned, f_cleaned, g_cleaned,
                     h_cleaned, i_cleaned, j_cleaned, k_cleaned, l_cleaned, m_cleaned,
                     n_cleaned, o_cleaned, p_cleaned, coordinates, q_cleaned, r_cleaned,
                     s_cleaned, t_cleaned, u_cleaned])  # writing the row for that address/price combination
The actual code I am working with is very long, so I have only included the part relevant to the error I am receiving: UnboundLocalError: local variable 'tax1' referenced before assignment.
Can someone help me understand how to assign these variables so that the values in them are still available after the loop completes?
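To make the failure mode concrete, here is a stripped-down sketch (not from the full script; the names are placeholders): a name bound only inside a for loop stays unbound if the loop runs zero times, so pre-binding the variables, or accumulating per-row results in a list, keeps them defined afterwards.

# Hypothetical sketch of the scoping issue: if findAll returns an empty
# list, the loop body never runs and tax1 etc. are never bound.
tax1 = percent1 = asses1 = percent2 = None   # pre-bind so later code always sees a value
rows = []                                    # accumulates one tuple per table row
for table in soup.findAll('table', {'id': 'taxHistoryTable'}):
    for row in table.findAll('tr', {'class': 'alt'}):
        cells = row.findAll('td', {'class': 'numeric'})
        deltas = row.findAll('span', {'class': 'inc'})
        if len(cells) >= 2 and len(deltas) >= 2:
            tax1, percent1, asses1, percent2 = cells[0], deltas[0], cells[1], deltas[1]
            rows.append((tax1, percent1, asses1, percent2))
# rows (possibly empty) and the pre-bound names are safe to use here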
Answer 0 (score: 0)
The elements you try to find after zestimate, such as tax and the rest, are empty tags in the urllib2 response. In short, if you make the request with urllib2 or mechanize, loop1 = soup.findAll('table',{'id':'taxHistoryTable'}) will not find anything, because its parent is an empty div tag in the raw response. So your code cannot work as written.
To collect the full HTML source code that you see in the browser, you need a tool that can handle JavaScript and behaves like a real browser; have a look at Selenium, ghost.py, PhantomJS, etc.
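A minimal sketch of the Selenium route, assuming a locally installed browser driver (houselink is the URL from the question's script):

from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Firefox()          # any locally installed driver works
driver.get(houselink)                 # the browser executes the page's JavaScript
html = driver.page_source             # fully rendered HTML, tax table included
driver.quit()

soup = BeautifulSoup(html)
loop1 = soup.findAll('table', {'id': 'taxHistoryTable'})  # should now be found
# an explicit wait (e.g. Selenium's WebDriverWait) may be needed before
# reading page_source if the table is injected asynchronously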
By the way, since you are trying to scrape Zillow: before unleashing bots on the site, it would be better to check whether their API already provides this data. Good luck.