Pagination loop with Python and BS4

Asked: 2019-07-09 18:20:51

Tags: python-3.x beautifulsoup

I am new to web scraping and am trying it out on this page: https://www.metrocuadrado.com/bogota.

The idea is to extract all the information. So far I can only do it for a single page, but I don't know how to handle the pagination. Is there any way to do that based on the code I already have?

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

# opening up connection, grabbing html
my_url = 'https://www.metrocuadrado.com/bogota'
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()

# html parser
page_soup = soup(page_html, "html.parser")


# grabs each product
containers = page_soup.findAll("div",{"class":"detail_wrap"})

filename = "metrocuadrado.csv"
f = open(filename, "w")

headers= "propertytype, businestype, cityname, neighborhood, description, price, area\n"

f.write(headers)


for container in containers:
    # attribute names must be quoted strings, not bare identifiers
    property_type = container["propertytype"]
    busines_type = container["businestype"]
    city_name = container["cityname"]
    neighborhood_location = container["neighborhood"]
    description = container.div.a.img["alt"]

    price_container = container.findAll("span",{"itemprop":"price"})
    price =  price_container[0].text

    area_container = container.findAll("div",{"class":"m2"})
    area = area_container[0].p.span.text

    print("property_type: " + property_type)
    print("busines_type: " + busines_type)
    print("city_name: " + city_name)
    print("neighborhood_location: " + neighborhood_location)
    print("description: " + description)
    print("price: " + price)
    print("area: " + area)

    # this write must be indented inside the loop, otherwise only the last row is saved
    f.write(property_type + "," + busines_type + "," + city_name + "," + neighborhood_location + "," + description.replace(",", "|") + "," + price + "," + area + "\n")

f.close()

1 Answer:

Answer 0 (score: 0)

You will need to scrape each page (probably in a loop), by figuring out what call is made to fetch page 2, page 3, and so on. You can work this out by looking at the page source, or by using the developer tools in your browser and inspecting the network calls.
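Putting that advice together, here is a minimal sketch of such a loop. Note that the `?page=N` query parameter, the `max_pages` cap, and the empty-response stop condition are all assumptions for illustration — the real pagination call for metrocuadrado.com has to be confirmed in the browser's network tab first. Each fetched page can then be fed into the existing `soup(...)` / `findAll(...)` parsing code from the question.

```python
from urllib.request import urlopen as uReq

BASE_URL = "https://www.metrocuadrado.com/bogota"


def page_url(base, page):
    # ASSUMPTION: the site paginates with a "?page=N" query string.
    # Verify the real parameter in your browser's developer tools
    # (Network tab) before relying on this.
    return base if page == 1 else base + "?page=" + str(page)


def fetch_pages(base, max_pages=5):
    """Yield the raw HTML of each results page, stopping on an empty response."""
    for page in range(1, max_pages + 1):
        uClient = uReq(page_url(base, page))
        page_html = uClient.read()
        uClient.close()
        if not page_html:  # assumed signal that we are past the last page
            break
        yield page_html


# each yielded page_html can be parsed exactly as in the question:
#     page_soup = soup(page_html, "html.parser")
#     containers = page_soup.findAll("div", {"class": "detail_wrap"})
```

Opening the CSV once before the loop and writing one row per container, as the question's code does, works unchanged inside this structure.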