I am an absolute beginner at web scraping with Python and know very little about Python programming. I am just trying to extract the information of lawyers in Tennessee. On the page there are several links, within those are further links to the categories of lawyers, and within those are the lawyers' details.
I have already extracted the links for the various cities into a list, and I have also fetched the profile links of the various lawyers available under each city link and stored them as a set. Now I am trying to get each lawyer's name, address, firm name, and practice area, and store it all in an .xls file.
import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

final = []
records = []
with requests.Session() as s:
    res = s.get('https://attorneys.superlawyers.com/tennessee/', headers={'User-agent': 'Super Bot 9000'})
    soup = bs(res.content, 'lxml')
    cities = [item['href'] for item in soup.select('#browse_view a')]
    for c in cities:
        r = s.get(c)
        s1 = bs(r.content, 'lxml')
        categories = [item['href'] for item in s1.select('.three_browse_columns:nth-of-type(2) a')]
        for c1 in categories:
            r1 = s.get(c1)
            s2 = bs(r1.content, 'lxml')
            lawyers = [item['href'].split('*')[1] if '*' in item['href'] else item['href']
                       for item in s2.select('.indigo_text .directory_profile')]
            final.append(lawyers)
    final_list = {item for sublist in final for item in sublist}
    for i in final_list:
        r2 = s.get(i)
        s3 = bs(r2.content, 'lxml')
        name = s3.find('h2').text.strip()
        add = s3.find("div").text.strip()
        f_name = s3.find("a").text.strip()
        p_area = s3.find('ul', {"class": "basic_profile aag_data_value"}).find('li').text.strip()
        records.append({'Names': name, 'Address': add, 'Firm Name': f_name, 'Practice Area': p_area})
df = pd.DataFrame(records, columns=['Names', 'Address', 'Firm Name', 'Practice Areas'])
df = df.drop_duplicates()
df.to_excel(r'C:\Users\laptop\Desktop\lawyers.xls', sheet_name='MyData2', index=False, header=True)
I expected to get an .xls file, but nothing is returned while it runs. It never terminates until I force-stop it, and no .xls file is created.
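A quick way to tell whether a crawl like this is hung or simply slow (it visits every city page and every category page before writing anything) is to print progress and cap the loops while testing. A minimal sketch, reusing the session setup and selectors from the script above; the [:2] slice is only there to keep the test run short:

import requests
from bs4 import BeautifulSoup as bs

# Short test run: only the first two city pages, with progress printed
# for every request, so a hang is distinguishable from a slow crawl.
with requests.Session() as s:
    res = s.get('https://attorneys.superlawyers.com/tennessee/',
                headers={'User-agent': 'Super Bot 9000'})
    soup = bs(res.content, 'lxml')
    cities = [item['href'] for item in soup.select('#browse_view a')]
    print(len(cities), 'city pages found')
    for c in cities[:2]:          # remove the slice for the full crawl
        print('fetching', c)
        r = s.get(c)
        s1 = bs(r.content, 'lxml')
        categories = [item['href'] for item in s1.select('.three_browse_columns:nth-of-type(2) a')]
        print(' ', len(categories), 'category pages in this city')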
Answer 0 (score: 0)
Generic calls like find('h2'), find('div'), and find('a') just grab the first such tag on the page, which is not the lawyer's details. You need to extract those details by visiting each lawyer's page and using the appropriate selector for each field. Like this:
import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

records = []
final = []
with requests.Session() as s:
    res = s.get('https://attorneys.superlawyers.com/tennessee/', headers={'User-agent': 'Super Bot 9000'})
    soup = bs(res.content, 'lxml')
    cities = [item['href'] for item in soup.select('#browse_view a')]
    for c in cities:
        r = s.get(c)
        s1 = bs(r.content, 'lxml')
        categories = [item['href'] for item in s1.select('.three_browse_columns:nth-of-type(2) a')]
        for c1 in categories:
            r1 = s.get(c1)
            s2 = bs(r1.content, 'lxml')
            # some hrefs wrap the real profile URL after a '*'
            lawyers = [item['href'].split('*')[1] if '*' in item['href'] else item['href']
                       for item in s2.select('.indigo_text .directory_profile')]
            final.append(lawyers)
    # flatten into a set to drop duplicate profile links
    final_list = {item for sublist in final for item in sublist}
    for link in final_list:
        r = s.get(link)
        soup = bs(r.content, 'lxml')
        # each field has its own selector on the profile page
        name = soup.select_one('#lawyer_name').text
        firm = soup.select_one('#firm_profile_page').text
        address = ' '.join([string for string in soup.select_one('#poap_postal_addr_block').stripped_strings][1:])
        practices = ' '.join([item.text for item in soup.select('#pa_list li')])
        row = [name, firm, address, practices]
        records.append(row)

df = pd.DataFrame(records, columns=['Name', 'Firm', 'Address', 'Practices'])
print(df)
df.to_csv(r'C:\Users\User\Desktop\Lawyers.csv', sep=',', encoding='utf-8-sig', index=False)
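Note that this writes a .csv rather than the .xls file originally asked for. If an Excel file is required, the same DataFrame can be written with pandas' to_excel at the end of the script instead; a minimal variant, assuming the openpyxl package is installed (recent pandas versions dropped the xlwt engine needed for the legacy .xls format, so .xlsx is the safer target):

# Write the same DataFrame to an Excel workbook instead of CSV.
# Requires: pip install openpyxl  (the engine pandas uses for .xlsx)
df.to_excel(r'C:\Users\User\Desktop\Lawyers.xlsx',
            sheet_name='MyData2', index=False)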