I'm trying to pull housing price data from Zillow using Beautiful Soup.
I fetch the page by property ID, e.g. http://www.zillow.com/homes/for_sale/18429834_zpid/.
When I call the find_all() function, I get no results:
results = soup.find_all('div', attrs={"class":"home-summary-row"})
However, if I reduce the HTML to just the bit I want, e.g.:
<html>
<body>
<div class=" status-icon-row for-sale-row home-summary-row">
</div>
<div class=" home-summary-row">
<span class=""> $1,342,144 </span>
</div>
</body>
</html>
I get 2 results: both <div> elements with the home-summary-row class. So my question is: why do I get no results when searching the full page?
Working example:
from bs4 import BeautifulSoup
import requests
zpid = "18429834"
url = "http://www.zillow.com/homes/" + zpid + "_zpid/"
response = requests.get(url)
html = response.content
#html = '<html><body><div class=" status-icon-row for-sale-row home-summary-row"></div><div class=" home-summary-row"><span class=""> $1,342,144 </span></div></body></html>'
soup = BeautifulSoup(html, "html5lib")
results = soup.find_all('div', attrs={"class":"home-summary-row"})
print(results)
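As a side note, the search itself is correct: BeautifulSoup treats class as a multi-valued attribute, so searching for a single class name also matches elements that carry additional classes. A minimal sketch using the question's own snippet and the built-in html.parser:

```python
from bs4 import BeautifulSoup

# The question's reduced snippet: one div with several classes,
# one div with only home-summary-row.
snippet = (
    '<div class=" status-icon-row for-sale-row home-summary-row"></div>'
    '<div class=" home-summary-row"><span class=""> $1,342,144 </span></div>'
)

soup = BeautifulSoup(snippet, "html.parser")
results = soup.find_all("div", attrs={"class": "home-summary-row"})
print(len(results))  # 2 - both divs match on the single class name
```

Both divs are found, so the zero-result behavior on the full page must come from something else (as the accepted answer explains, the parser).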
Answer 0 (score: 5)
Your HTML is not well-formed, and in cases like this choosing the right parser is crucial. In BeautifulSoup, there are currently 3 available HTML parsers, which handle broken HTML in different ways:
- html.parser (built-in, no extra modules needed)
- lxml (the fastest, requires lxml to be installed)
- html5lib (the most lenient, requires html5lib to be installed)
The Differences between parsers documentation page describes these differences in more detail. In your case, to demonstrate the difference:
>>> from bs4 import BeautifulSoup
>>> import requests
>>>
>>> zpid = "18429834"
>>> url = "http://www.zillow.com/homes/" + zpid + "_zpid/"
>>> response = requests.get(url)
>>> html = response.content
>>>
>>> len(BeautifulSoup(html, "html5lib").find_all('div', attrs={"class":"home-summary-row"}))
0
>>> len(BeautifulSoup(html, "html.parser").find_all('div', attrs={"class":"home-summary-row"}))
3
>>> len(BeautifulSoup(html, "lxml").find_all('div', attrs={"class":"home-summary-row"}))
3
As you can see, in your case both html.parser and lxml do the job, but html5lib does not.
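One practical pattern (a sketch, not part of either answer) is a small helper that prefers the faster lxml parser when it happens to be installed and falls back to the stdlib parser otherwise; the make_soup name is hypothetical:

```python
from bs4 import BeautifulSoup

def make_soup(html):
    """Hypothetical helper: prefer lxml when installed,
    otherwise fall back to the built-in html.parser."""
    try:
        import lxml  # noqa: F401 - only checking availability
        return BeautifulSoup(html, "lxml")
    except ImportError:
        return BeautifulSoup(html, "html.parser")

snippet = '<div class="home-summary-row"><span> $1,342,144 </span></div>'
soup = make_soup(snippet)
print(soup.find_all("div", attrs={"class": "home-summary-row"})[0].span.text.strip())
```

Note that the two parsers can still build different trees from badly broken HTML, so results may differ depending on which branch runs; for a page like this one the explicit choice shown in the answer above is the safer option.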
Answer 1 (score: 4)
import requests
from bs4 import BeautifulSoup

zpid = "18429834"
url = "http://www.zillow.com/homes/" + zpid + "_zpid/"
r = requests.get(url)
soup = BeautifulSoup(r.content, "lxml")
g_data = soup.find_all("div", {"class": "home-summary-row"})
print(g_data[1].text)
#for item in g_data:
#    print(item("span")[0].text)
I got this working too - but it looks like somebody beat me to it.
Posting anyway.
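As a small follow-up sketch (not from either answer): once the span text is extracted, the dollar string can be converted to a plain integer for storage or comparison. The literal below is the price from the question's snippet:

```python
# Price string as it appears in the span, with surrounding whitespace.
price_text = " $1,342,144 "

# Strip whitespace, the dollar sign, and thousands separators.
price = int(price_text.strip().lstrip("$").replace(",", ""))
print(price)  # 1342144
```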
Answer 2 (score: 2)
According to the W3.org Validator, the HTML has a number of problems, such as stray closing tags and tags split across multiple lines. For example:
<a
href="http://www.zillow.com/danville-ca-94526/sold/" title="Recent home sales" class="" data-za-action="Recent Home Sales" >
Markup like this can make it much harder for BeautifulSoup to parse the HTML.
You may want to try running something to clean up the HTML, such as removing line breaks and trailing whitespace from the end of each line. BeautifulSoup can also tidy up the HTML tree for you:
from bs4 import BeautifulSoup

tree = BeautifulSoup(bad_html, "html.parser")
good_html = tree.prettify()
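A minimal runnable sketch of that cleanup step, where bad_html stands in for the fetched page content and reuses the multi-line <a> tag quoted above:

```python
from bs4 import BeautifulSoup

# Stand-in for the fetched page: a tag split across lines, as in the
# validator output quoted above.
bad_html = (
    '<a\n'
    'href="http://www.zillow.com/danville-ca-94526/sold/" '
    'title="Recent home sales" class="">Recent Home Sales</a>'
)

tree = BeautifulSoup(bad_html, "html.parser")
good_html = tree.prettify()  # re-serialized, consistently indented markup
print(good_html)
```

prettify() returns a re-indented string rather than modifying the tree, so the cleaned markup can be re-parsed or saved separately.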