Trying to get links from all the pages

Time: 2019-11-12 06:53:34

Tags: python-3.x beautifulsoup

I am new to Python and trying to learn some web scraping concepts. I am trying to get the links to the members of a website, but I am running into an error.

import os
import requests
import sys
import time
from bs4 import BeautifulSoup

r = requests.get('https://www.medhos.in/consult/chennai/general-practitioner')
soup = BeautifulSoup(r.text, 'lxml')

for page_no in range(1, 2):
    data = {
        'keyvalue': '8vzODRdlADTr6AAN',
        'pageSize': 20,
        'pageNumber': page_no
    }
    page = requests.post('https://www.medhos.in/filter/BasicFilterSearch', data=data)
    soup1 = BeautifulSoup(page.text, 'html.parser')

    for data1 in soup1.find_all('div', class_='tg-directpost doctors ng-scope'):
        name = data1.find('h4', class_='over_hid en_font18')
        link = name.find('a')
        print(link['href'])
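One likely cause of the error is that `data1.find('h4', class_='over_hid en_font18')` returns `None` for result cards that lack that heading, so the subsequent `name.find('a')` raises `AttributeError`. Below is a minimal sketch of the extraction loop with `None` guards added; it runs against a static HTML snippet standing in for one page of POST results (the class names are taken from the question, the HTML structure itself is an assumption), not against the live site:

```python
from bs4 import BeautifulSoup

# Static stand-in for one page of results; the div/h4 class names come
# from the question, the rest of the structure is assumed.
html = """
<div class="tg-directpost doctors ng-scope">
  <h4 class="over_hid en_font18"><a href="/doctor/1">Dr. A</a></h4>
</div>
<div class="tg-directpost doctors ng-scope">
  <!-- a card without the expected heading: find() returns None here -->
</div>
"""

soup = BeautifulSoup(html, 'html.parser')

links = []
for card in soup.find_all('div', class_='tg-directpost doctors ng-scope'):
    name = card.find('h4', class_='over_hid en_font18')
    if name is None:          # guard: skip cards without the heading
        continue
    a = name.find('a')
    if a is not None and a.has_attr('href'):
        links.append(a['href'])

print(links)  # only the card that actually has a link: ['/doctor/1']
```

The same guards can be dropped into the original loop in place of the bare `name.find('a')` / `link['href']` chain, so one malformed card no longer aborts the whole scrape.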

0 answers:

There are no answers yet.