Stuck on my web scraper

Asked: 2020-09-26 22:39:17

Tags: python python-3.x web-scraping

I'm working on a web scraper, and when I try to pull one page of data it keeps printing the same information over and over.

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup 

my_url = 'https://www.realtor.com/realestateagents/phoenix_az'

#opening up connection, grabbing the page
uClient = uReq(my_url)
#read page 
page_html = uClient.read()
#close page
uClient.close()

#html parsing
page_soup = soup(page_html, "html.parser")

#finds all realtors on page 
containers = page_soup.findAll("div",{"class":"agent-list-card clearfix"})

for container in containers:
    name = page_soup.find('div', class_='agent-name text-bold')
    agent_name = name.text.strip()

    number = page_soup.find('div', class_='agent-phone hidden-xs hidden-xxs')
    agent_number = number.text.strip()

    print("name: " + agent_name)
    print("number: " + agent_number)

1 Answer:

Answer 0 (score: 0)

The fix is to search within container inside the loop rather than page_soup. Calling page_soup.find(...) inside the loop always returns the first match on the whole page, which is why every iteration prints the same agent.

Also, you should check whether find() actually returned a result (it returns None on no match), or catch the exception that would otherwise be raised when accessing .text.
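A sketch of the corrected loop, demonstrated against a small inline HTML sample rather than the live realtor.com page (the class names come from the original script; the agent names and numbers here are made up for illustration):

```python
from bs4 import BeautifulSoup

# Minimal inline HTML mimicking the structure the original script targets.
html = """
<div class="agent-list-card clearfix">
  <div class="agent-name text-bold">Alice Example</div>
  <div class="agent-phone hidden-xs hidden-xxs">(111) 111-1111</div>
</div>
<div class="agent-list-card clearfix">
  <div class="agent-name text-bold">Bob Example</div>
  <div class="agent-phone hidden-xs hidden-xxs">(222) 222-2222</div>
</div>
"""

page_soup = BeautifulSoup(html, "html.parser")
containers = page_soup.find_all("div", {"class": "agent-list-card clearfix"})

for container in containers:
    # Search within the current container, NOT the whole page_soup,
    # so each iteration sees a different agent card.
    name = container.find("div", class_="agent-name text-bold")
    number = container.find("div", class_="agent-phone hidden-xs hidden-xxs")

    # find() returns None when there is no match; guard before .text
    # instead of letting an AttributeError propagate.
    agent_name = name.text.strip() if name else "N/A"
    agent_number = number.text.strip() if number else "N/A"

    print("name: " + agent_name)
    print("number: " + agent_number)
```

With this change each pass of the loop prints a different agent, and a card that is missing a name or phone element falls back to "N/A" instead of crashing.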