Web scraping from multiple pages with a for loop, part 2

Date: 2020-12-21 18:23:47

Tags: python html for-loop web-scraping

My original question:

"I have created a web scraper that picks data from house listings.

I have a problem with changing pages. I did make a for loop that goes from 1 to some number.

The problem is: on this web page the last 'page' can differ all the time. Right now it is 70, but tomorrow it could be 68 or 72. If my range were (1-74), it would print the last page many times, because if you go past the maximum, the site always loads the last page."

Then I got help from Ricco D, who wrote code that knows when to stop:

import requests
from bs4 import BeautifulSoup as bs

url='https://www.etuovi.com/myytavat-asunnot/oulu?haku=M1582971026&sivu=1000'
page=requests.get(url)
soup = bs(page.content,'html.parser')

last_page = None
pages = []

buttons=soup.find_all('button', class_= "Pagination__button__3H2wX")
for button in buttons:
    pages.append(button.text)

print(pages)

This works well.
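The stopping logic can be exercised offline. Below is a minimal sketch; the pagination markup is an invented stand-in, and only the button class name ("Pagination__button__3H2wX") comes from the question:

```python
from bs4 import BeautifulSoup

# Invented stand-in for the listing page's pagination markup.
html = """
<nav>
  <button class="Pagination__button__3H2wX">1</button>
  <button class="Pagination__button__3H2wX">2</button>
  <button class="Pagination__button__3H2wX">70</button>
</nav>
"""

soup = BeautifulSoup(html, "html.parser")
# Collect the text of every pagination button; the last one is the page count.
pages = [b.text for b in soup.find_all("button", class_="Pagination__button__3H2wX")]
last_page = int(pages[-1])
print(last_page)  # 70
```

Note that to actually visit every page, the loop needs an inclusive upper bound, i.e. range(1, last_page + 1), since range() excludes its stop value.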

When I try to combine it with my original code, I run into an error:

Traceback (most recent call last):
  File "C:/Users/Käyttäjä/PycharmProjects/Etuoviscaper/etuovi.py", line 29, in <module>
    containers = page_soup.find("div", {"class": "ListPage__cardContainer__39dKQ"})
  File "C:\Users\Käyttäjä\PycharmProjects\Etuoviscaper\venv\lib\site-packages\bs4\element.py", line 2173, in __getattr__
    raise AttributeError(
AttributeError: ResultSet object has no attribute 'find'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?

This is the error I get.

Any ideas how to get this to work? Thanks!

import bs4
from bs4 import BeautifulSoup as soup
from urllib.request import urlopen as uReq
import re
import requests

my_url = 'https://www.etuovi.com/myytavat-asunnot/oulu?haku=M1582971026&sivu=1'

filename = "asunnot.csv"
f = open(filename, "w")
headers = "Neliöt; Hinta; Osoite; Kaupunginosa; Kaupunki; Huoneistoselitelmä; Rakennusvuosi\n"
f.write(headers)

page = requests.get(my_url)
soup = soup(page.content, 'html.parser')

pages = []
buttons = soup.findAll("button", {"class": "Pagination__button__3H2wX"})
for button in buttons:
    pages.append(button.text)


last_page = int(pages[-1])

for sivu in range(1, last_page):

    req = requests.get(my_url + str(sivu))
    page_soup = soup(req.text, "html.parser")
    containers = page_soup.findAll("div", {"class": "ListPage__cardContainer__39dKQ"})

    for container in containers:
        size_list = container.find("div", {"class": "flexboxgrid__col-xs__26GXk flexboxgrid__col-md-4__2DYW-"}).text
        size_number = re.findall("\d+\,*\d+", size_list)
        size = ''.join(size_number)  # apartment size in square metres

        prize_line = container.find("div", {"class": "flexboxgrid__col-xs-5__1-5sb flexboxgrid__col-md-4__2DYW-"}).text
        prize_number_list = re.findall("\d+\d+", prize_line)
        prize = ''.join(prize_number_list[:2])  # apartment price

        address_city = container.h4.text

        address_list = address_city.split(', ')[0:1]
        address = ' '.join(address_list)  # street address

        city_part = address_city.split(', ')[-2]  # district

        city = address_city.split(', ')[-1]  # city

        type_org = container.h5.text
        type = type_org.replace("|", "").replace(",", "").replace(".", "")  # apartment type

        year_list = container.find("div", {"class": "flexboxgrid__col-xs-3__3Kf8r flexboxgrid__col-md-4__2DYW-"}).text
        year_number = re.findall("\d+", year_list)
        year = ' '.join(year_number)

        print("pinta-ala: " + size)
        print("hinta: " + prize)
        print("osoite: " + address)
        print("kaupunginosa: " + city_part)
        print("kaupunki: " + city)
        print("huoneistoselittelmä: " + type)
        print("rakennusvuosi: " + year)

        f.write(size + ";" + prize + ";" + address + ";" + city_part + ";" + city + ";" + type + ";" + year + "\n")

f.close()

2 Answers:

Answer 0: (score: 1)

Your main problem has to do with the way you use BeautifulSoup. You first import it under the name soup - and then overwrite that name when you create your first BeautifulSoup instance:

soup = soup(page.content, 'html.parser')

From this point on, soup is no longer the name of the library but the object you just created. So when you later try to create a new instance (page_soup = soup(req.text, "html.parser")), it fails, because soup no longer refers to BeautifulSoup.

The best fix is to import the library properly, i.e. from bs4 import BeautifulSoup (or import it under a separate short name such as bs - like Ricco D does), and then change the two instantiation lines like so:

soup = BeautifulSoup(page.content, 'html.parser')
page_soup = BeautifulSoup(req.text, "html.parser")

As for .content versus .text: requests exposes the raw response bytes as page.content and the decoded string as page.text. BeautifulSoup accepts either, so both lines above work.
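The AttributeError in the traceback follows directly from this name shadowing: calling a BeautifulSoup (or Tag) object is shorthand for find_all(), so the second "instantiation" silently returns a ResultSet rather than a new soup. A minimal sketch of the failure mode (the markup here is invented):

```python
from bs4 import BeautifulSoup as soup

page_html = "<div><p>first page</p></div>"
soup = soup(page_html, "html.parser")  # the name now points at an object, not the class

# This looks like parsing a second page, but calling a soup object is
# shorthand for find_all(), so it returns an (empty) ResultSet instead:
page_soup = soup("<p>second page</p>", "html.parser")
print(type(page_soup).__name__)  # ResultSet

# page_soup.find(...) would now raise the AttributeError from the question.
```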

Good luck.

Answer 1: (score: 0)

Finding elements by class name doesn't seem like the best idea.. because of this: all the following elements share the same class name.

Same class name for multiple divs

Because of the language I don't know what you are looking for. My suggestion.. go to the site > press F12 > press Ctrl+F > type your XPath.. and see which elements you get. If you don't know XPath, read this: https://blog.scrapinghub.com/2016/10/27/an-introduction-to-xpath-with-examples
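If the hashed class suffixes are the worry (they are build artifacts and can change between deployments), one option - my suggestion, not something from the question - is to match only the stable prefix with a CSS attribute selector via select(). A sketch with invented markup:

```python
from bs4 import BeautifulSoup

# Invented markup: two listing cards whose hashed class suffixes differ.
html = """
<div class="ListPage__cardContainer__39dKQ"><h4>Kirkkokatu 1, Keskusta, Oulu</h4></div>
<div class="ListPage__cardContainer__xYz12"><h4>Asemakatu 2, Keskusta, Oulu</h4></div>
"""

soup = BeautifulSoup(html, "html.parser")
# [class^="..."] matches elements whose class attribute starts with the prefix.
cards = soup.select('div[class^="ListPage__cardContainer"]')
print(len(cards))  # 2
```

Note that ^= tests the start of the whole class attribute, so this only works while the hashed class comes first; [class*="ListPage__cardContainer"] is the looser alternative.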
