How to extract links with Python BeautifulSoup and process pages one after another

Asked: 2019-03-04 12:47:09

Tags: python-3.x beautifulsoup

I'm trying to extract the links and then process the pages they load, but I'm not even getting the links. Code:

from bs4 import BeautifulSoup
import requests

r = requests.get('http://www.indiabusinessguide.in/business-categories/agriculture/agricultural-equipment.html')
soup = BeautifulSoup(r.text, 'lxml')

links = soup.find_all('a', class_='link_orange')
for link in links:
    print(link['href'])

Please help me with loading the pages and extracting the links.
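The selector itself is likely not the problem: the listing anchors probably are not present in the HTML returned by the plain GET (the site appears to load them dynamically), so `find_all` has nothing to match. As a sanity check, the same `find_all('a', class_='link_orange')` call does work on static markup that contains such anchors. A minimal sketch using a made-up HTML snippet:

```python
from bs4 import BeautifulSoup

# Hypothetical static snippet mimicking the target markup
html = '''
<div>
  <a class="link_orange" href="http://example.com/a">A</a>
  <a class="link_orange" href="http://example.com/b">B</a>
  <a class="other" href="http://example.com/c">C</a>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
# class_ filters on the CSS class attribute
hrefs = [a['href'] for a in soup.find_all('a', class_='link_orange')]
print(hrefs)  # only the two link_orange anchors
```

If this works but the live page yields nothing, inspect the browser's network tab for the request that actually delivers the listings (as the answer below does).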

1 answer:

Answer 0 (score: 0)

Try the lxml library. The listings come from an AJAX endpoint, so POST the request to that URL with requests and parse the response.

import requests
from lxml import html

contact_list = []

def scrape(url, pages):
    # request each page of listings from the AJAX endpoint
    for page in range(1, pages):
        headers = {
            "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
            "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36",
            "X-Requested-With": "XMLHttpRequest",
            # session cookie copied from the browser; it will expire
            "Cookie": "PHPSESSID=2q0tk3fi1kid0gbdfboh94ed56",
        }

        data = {
            "page": f"{page}"
        }

        r = requests.post(url, headers=headers, data=data)
        tree = html.fromstring(r.content)

        links = tree.xpath('//a[@class="link_orange"]')
        for link in links:
            contact_list.append(link.get('href'))


url = "http://www.indiabusinessguide.in/ajax_advertiselist.php"
scrape(url, 10)
print(contact_list)
print(len(contact_list))
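Two things worth noting: `range(1, pages)` stops at page 9, so use `range(1, pages + 1)` if page 10 should be included, and the hard-coded PHPSESSID cookie will stop working once the session expires. The XPath extraction step itself can be verified offline on a small static snippet (the markup below is invented for illustration):

```python
from lxml import html

# Hypothetical fragment shaped like one page of the AJAX response
sample = ('<div>'
          '<a class="link_orange" href="/x.html">X</a>'
          '<a class="link_orange" href="/y.html">Y</a>'
          '<a class="plain" href="/z.html">Z</a>'
          '</div>')

tree = html.fromstring(sample)
# the @class predicate matches the exact attribute value
links = [a.get('href') for a in tree.xpath('//a[@class="link_orange"]')]
print(links)
```

This confirms the XPath is correct, so an empty `contact_list` against the live site points at the request (cookie, headers, page parameter) rather than the parsing.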