Fetching lawyer details from multiple links with bs4 in Python

Asked: 2019-06-08 13:31:41

Tags: python web-scraping beautifulsoup

I am an absolute beginner at web scraping with Python and know very little Python programming. I am just trying to extract information about lawyers in Tennessee. The page contains multiple links to cities, each city page has further links to lawyer categories, and those category pages in turn hold the lawyers' details.

I have already extracted the links for the various cities into a list, and also the various lawyer-category links available on each city page. Now I am trying to get, for each category in each city, the profile links of the lawyers, from which their details can be retrieved. But it returns an empty list. What can be done, if this is possible at all?

import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

res = requests.get('https://attorneys.superlawyers.com/tennessee/', headers = {'User-agent': 'Super Bot 9000'})
soup = bs(res.content, 'lxml')

cities = [item['href'] for item in soup.select('#browse_view a')]
for c in cities:
    r=requests.get(c)
    s1=bs(r.content,'lxml')
    categories = [item['href'] for item in s1.select('.three_browse_columns:nth-of-type(2) a')]
    #print(categories)
    for c1 in categories:
        r1=requests.get(c1)
        s2=bs(r1.content,'lxml')
        lawyers = [item['href'] for item in s2.select('.directory_profile a')]
        print(lawyers)

"I expected the output to be the links to the profiles of the lawyers in each category, but it returns empty lists."

[][][][][][][]

3 answers:

Answer 0 (score: 2)

When using the class selector (which was your first problem) you were already at the a-tag level.
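That point can be seen on a toy fragment (the markup below is a hypothetical stand-in for one listing entry on the site): .directory_profile already matches the a tag itself, so .directory_profile a descends one level further and finds nothing.

```python
from bs4 import BeautifulSoup

# Hypothetical fragment mimicking one listing entry on the site.
html = '<div class="indigo_text"><a class="directory_profile" href="/x.html">X</a></div>'
soup = BeautifulSoup(html, 'html.parser')

# The class sits on the <a> itself, so descending one level finds nothing...
print(soup.select('.directory_profile a'))                                  # []
# ...while selecting the classed element directly yields the anchor.
print([a['href'] for a in soup.select('.indigo_text .directory_profile')])  # ['/x.html']
```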

I used a different selector below and tested against some URLs that masquerade as the same lawyer. I split off the trailing URL so that duplicates can be removed with a set.

I use a Session for the efficiency of re-using the connection. I append each page's lawyer profiles to a list and flatten the list of lists with a set comprehension, which removes any duplicates.

import requests
from bs4 import BeautifulSoup as bs

final = []
with requests.Session() as s:
    res = s.get('https://attorneys.superlawyers.com/tennessee/', headers = {'User-agent': 'Super Bot 9000'})
    soup = bs(res.content, 'lxml')
    cities = [item['href'] for item in soup.select('#browse_view a')]
    for c in cities:
        r = s.get(c)
        s1 = bs(r.content,'lxml')
        categories = [item['href'] for item in s1.select('.three_browse_columns:nth-of-type(2) a')]
        for c1 in categories:
            r1 = s.get(c1)
            s2 = bs(r1.content,'lxml')
            lawyers = [item['href'].split('*')[1] if '*' in item['href'] else item['href'] for item in s2.select('.indigo_text .directory_profile')]
            final.append(lawyers)
final_list = {item for sublist in final for item in sublist}
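The flatten-and-dedupe step can be tried in isolation. The sample data below is made up: one duplicate profile URL, and one href that hides the real URL behind a '*' in the way the answer's split handles.

```python
# Made-up sublists standing in for the scraped results: one duplicate
# profile URL, and one href that hides the real URL behind a '*'.
final = [
    ['https://profiles.example.com/a.html',
     'redirect*https://profiles.example.com/b.html'],
    ['https://profiles.example.com/a.html'],
]

# Normalise the '*'-style hrefs, then flatten and dedupe in one set comprehension.
final_list = {
    item.split('*')[1] if '*' in item else item
    for sublist in final
    for item in sublist
}
print(sorted(final_list))
# ['https://profiles.example.com/a.html', 'https://profiles.example.com/b.html']
```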

Answer 1 (score: 1)

From another post:


This happens because you cannot use nth-of-type() together with a class selector; it can only be used on a tag, like this: table:nth-of-type(4).

Therefore your categories variable will return an empty list.

The workaround is given in the same post.
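The tag-anchored usage can be checked on a small made-up document: nth-of-type counts siblings of the same tag name, so anchoring the pseudo-class to a tag is unambiguous.

```python
from bs4 import BeautifulSoup

# Made-up siblings: two <div>s with a <p> in between.
html = ('<section>'
        '<div class="col">first</div>'
        '<p class="col">para</p>'
        '<div class="col">second</div>'
        '</section>')
soup = BeautifulSoup(html, 'html.parser')

# nth-of-type counts siblings of the same tag, so div:nth-of-type(2)
# selects the second <div>, skipping the <p> in between.
print([el.get_text() for el in soup.select('section div:nth-of-type(2)')])  # ['second']
```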

Answer 2 (score: 1)

I tried the following approach:

import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

res = requests.get('https://attorneys.superlawyers.com/tennessee/', headers = {'User-agent': 'Super Bot 9000'})
soup = bs(res.content, 'lxml')

cities = [item['href'] for item in soup.select('#browse_view a')]
for c in cities:
    r=requests.get(c)
    s1=bs(r.content,'lxml')
    categories = [item['href'] for item in s1.select('.three_browse_columns:nth-of-type(2) a')]
    #print(categories)
    for c1 in categories:
        r1=requests.get(c1)
        s2=bs(r1.content,'lxml')
        lawyers = [item['href'] for item in s2.select('#lawyer_0_main a')]
        print(lawyers)

"It prints not only the profile links but also the unwanted About and other associated links. I only need the lawyers' profile links."

"The output is shown as:"

['https://profiles.superlawyers.com/tennessee/alamo/lawyer/jim-emison/c99a7c4f-3a42-4953-9260-3750f46ed4bd.html', 'https://www.superlawyers.com/about/selection_process.html']
['https://profiles.superlawyers.com/tennessee/alamo/lawyer/jim-emison/c99a7c4f-3a42-4953-9260-3750f46ed4bd.html', 'https://www.superlawyers.com/about/selection_process.html']
['https://profiles.superlawyers.com/tennessee/alamo/lawyer/jim-emison/c99a7c4f-3a42-4953-9260-3750f46ed4bd.html', 'https://www.superlawyers.com/about/selection_process.html']
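The unwanted links in the output above live on a different host than the profile links, so one possible fix (a sketch, not part of the answer) is to keep only the hrefs under the profiles host:

```python
# The hrefs from the output above; the second is an 'About' link to drop.
lawyers = [
    'https://profiles.superlawyers.com/tennessee/alamo/lawyer/jim-emison/'
    'c99a7c4f-3a42-4953-9260-3750f46ed4bd.html',
    'https://www.superlawyers.com/about/selection_process.html',
]

# Keep only links on the profile host.
profiles = [u for u in lawyers if u.startswith('https://profiles.superlawyers.com/')]
print(profiles)
```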