I am trying to get links from the site https://www.lianjia.com/city/ by searching within each province. Starting with the first province, I want to get the links of the cities that belong to it. I find all the li tags that contain the href links, and print(t) shows them, but when I try to extract the link with t.get('href'), it returns nothing. What is wrong with the following code? Can anyone help?
import requests
from bs4 import BeautifulSoup

url1 = 'https://www.lianjia.com/city/'
req1 = requests.get(url1)
soup1 = BeautifulSoup(req1.content, 'html.parser')
part = soup1.findAll("div", {"class": "city_province"})
for t in part[0].find_all('li'):
    print(t)
    print(t.get('href'))
Answer 0 (score: 0)
The li tags do not have an href attribute. You have to get the anchor (a) tags inside them to get the href.

Try this:
import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get('https://www.lianjia.com/city/').content, 'html.parser')
provinces = soup.find_all("div", {"class": "city_province"})
anchors = [[a["href"] for a in p.find_all("a")] for p in provinces]

for province_urls in anchors:
    print(province_urls)
Output:
['https://aq.lianjia.com/', 'https://cz.fang.lianjia.com/', 'https://hf.lianjia.com/', 'https://mas.lianjia.com/', 'https://wuhu.lianjia.com/']
['https://bj.lianjia.com/']
['https://cq.lianjia.com/']
['https://fz.lianjia.com/', 'https://quanzhou.lianjia.com/', 'https://xm.lianjia.com/', 'https://zhangzhou.lianjia.com/']
and so on...
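If you also want the city name alongside each URL, here is a minimal extension sketch. It assumes each anchor's visible text is the city name, which is not confirmed by the original post:

import requests
from bs4 import BeautifulSoup

# Minimal sketch: collect city name -> URL pairs from each province block.
# Assumption: the anchor text is the city name (not verified against the page).
soup = BeautifulSoup(requests.get('https://www.lianjia.com/city/').content, 'html.parser')

city_links = {}
for province in soup.find_all("div", {"class": "city_province"}):
    for a in province.find_all("a"):
        href = a.get("href")
        name = a.get_text(strip=True)
        if href:  # skip anchors that have no href
            city_links[name] = href

for name, url in city_links.items():
    print(name, url)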