Web scraping multiple similar pages

Date: 2020-04-12 01:05:50

Tags: python web-scraping beautifulsoup

I'm new to Python web scraping, and I'm trying to get the addresses of the different Winmar locations in Canada and write the results to a CSV file. So far, the only way I've found to distinguish the different location pages is by the code (number) at the end of the URL. The problem is that the results don't change as the program runs; it prints and writes the results for the first location (305) over and over. Thanks for your time and consideration!

Here is my code:

import csv
import requests
from bs4 import BeautifulSoup

x = 0
numbers = ['305', '405', '306', '307', '308', '309', '4273']

f = csv.writer(open('Winmar_locations.csv', 'w'))
f.writerow(['City:', 'Address:'])

for links in numbers:

    for x in range(0, 6):
        url = 'https://www.winmar.ca/find-a-location/' + str(numbers[x])
        r = requests.get(url)
        soup = BeautifulSoup(r.content, "html.parser")

    location_name = soup.find("div", attrs={"class": "title_block"})
    location_name_items = location_name.find_all('h2')

    location_list = soup.find(class_='quick_info')
    location_list_items = location_list.find_all('p')

    for name in location_name_items:
        names = name.text
        names = names.replace('Location | ', '')

    for location in location_list_items:
        locations = location.text.strip()
        locations = locations.replace('24 Hour Emergency | (902) 679-1116','')

    print(names, locations)
    x = x+1

    f.writerow([names, locations])

2 Answers:

Answer 0 (score: 2):

There are a few things wrong in your code, and one quirk in the website you are scraping:

  • Visiting a URL like https://www.winmar.ca/find-a-location/308 doesn't actually switch the location on the first request; it has to be https://www.winmar.ca/find-a-location/#308, with a hash (#) before the number.

  • The site has duplicated HTML blocks that share the same classes, which means it loads nearly all of the locations every time and just picks which one to display from JS code (sloppy markup on their part). That makes your selectors always hit the same first block, which explains why you kept getting the same location. You can confirm this with the quick check after this list.

  • Finally, you have a lot of unnecessary loops; all you need is a single loop over the numbers array.
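
Here is a quick way to confirm the duplicated-HTML point (a minimal sketch; the exact count depends on the live page, but it should come back well above 1 even though the URL names a single location):

import requests
from bs4 import BeautifulSoup

r = requests.get('https://www.winmar.ca/find-a-location/#305')
soup = BeautifulSoup(r.content, "html.parser")

# Each location block carries its own title_block div, so counting them
# shows how many locations the page actually shipped.
print(len(soup.find_all("div", attrs={"class": "title_block"})))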

Here is a modified version of your code:

import csv
import requests
from bs4 import BeautifulSoup

numbers = ['305', '405', '306', '307', '308', '309', '4273']

names = []
locations = []
# Note: range(0, 6) only covers the first six codes; use range(len(numbers))
# if you also want '4273'.
for x in range(0, 6):
    url = 'https://www.winmar.ca/find-a-location/#' + str(numbers[x])
    print(f"pinging url {url}")

    r = requests.get(url)
    soup = BeautifulSoup(r.content, "html.parser")

    # Narrow every lookup to the block whose data-id matches this location
    # code, since the page ships the markup for all locations at once.
    scope = soup.find(attrs={"data-id": str(numbers[x])})

    location_name = scope.find("div", attrs={"class": "title_block"})
    location_list = scope.find(class_='quick_info')
    location_list_items = location_list.find_all('p')

    name = location_name.find_all("h2")[0].text
    print(name)

    names.append(name)

    for location in location_list_items:
        loc = location.text.strip()
        # Skip the emergency phone line; keep only the address lines.
        if '24 Hour Emergency' in loc:
            continue
        print(loc)
        locations.append(loc)

Notice the scoping I added:

    scope = soup.find(attrs={"data-id": str(numbers[x])})

This makes your code independent of how many locations happen to be loaded into the HTML; you just anchor the scope at the one location you want.
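
If a code has no matching block in the page (or the markup changes), scope comes back as None and the .find calls after it raise AttributeError. A small guard inside the loop keeps the run going (a sketch, assuming you would rather skip a missing location than crash):

    scope = soup.find(attrs={"data-id": str(numbers[x])})
    if scope is None:
        # Nothing in the page carries this data-id; skip it instead of crashing.
        print(f"no location block found for {numbers[x]}")
        continue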

And the result is:

pinging url https://www.winmar.ca/find-a-location/#305
Location | Annapolis
70 Donald E Hiltz Connector Road
Kentville, NS
B4N 3V7
pinging url https://www.winmar.ca/find-a-location/#405
Location | Bridgewater
15585 Highway # 3
Hebbville, NS
B4V 6X7
pinging url https://www.winmar.ca/find-a-location/#306
Location | Halifax
9 Isnor Dr
Dartmouth, NS
B3B 1M1
pinging url https://www.winmar.ca/find-a-location/#307
Location | New Glasgow
5074 Hwy. #4, RR #1
Westville, NS
B0K 2A0
pinging url https://www.winmar.ca/find-a-location/#308
Location | Port Hawkesbury
8 Industrial Park Rd
Lennox Passage, NS
B0E 1V0
pinging url https://www.winmar.ca/find-a-location/#309
Location | Sydney
358 Keltic Drive
Sydney River, NS
B1R 1V7
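
One last note: this modified version prints the results but no longer writes the CSV file the question asked for. A minimal way to add that back (a sketch; it assumes writer is a csv.writer opened before the loop, as in the original question, and that the paragraphs left after the filter are exactly the address lines):

    # Inside the loop, after filtering location_list_items:
    address_lines = [p.text.strip() for p in location_list_items
                     if '24 Hour Emergency' not in p.text]
    writer.writerow([name, ', '.join(address_lines)])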

Answer 1 (score: 1):

Although you already have an answer that does the job, I thought this was worth posting as well. I've tried to keep the script concise and avoid verbosity. Make sure your bs4 version is 4.7.0 or later so that it supports the pseudo selector I've used in the script to locate the address.
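
A quick way to check what you have installed (bs4 4.7.0+ pulls in soupsieve, the library that actually implements the CSS selectors):

import bs4
import soupsieve

print(bs4.__version__)        # should be 4.7.0 or later
print(soupsieve.__version__)  # provides the pseudo selector support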

import csv
import requests
from bs4 import BeautifulSoup

base = 'https://www.winmar.ca/find-a-location/#{}'

numbers = ['305', '405', '306', '307', '308', '309', '4273']

with open("Winmar_locations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(['City', 'Address'])

    while numbers:
        num = numbers.pop(0)
        r = requests.get(base.format(num))
        soup = BeautifulSoup(r.content, "html.parser")

        # Scope each selector to the block whose data-id matches this code;
        # the last child node of the h2.title is the city portion of the heading.
        location_name = soup.select_one(f"[data-id='{num}'] .title_block > h2.title").contents[-1]
        # The address is the <p> that immediately follows the "Address" heading.
        location_address = soup.select_one(f"[data-id='{num}'] .heading:contains('Address') + p").get_text(strip=True)
        writer.writerow([location_name, location_address])
        print(location_name, location_address)
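
One caveat: newer soupsieve releases deprecate :contains() in favour of :-soup-contains(), so on a current install the address selector would be written like this (same behaviour, only the pseudo-class is renamed):

location_address = soup.select_one(
    f"[data-id='{num}'] .heading:-soup-contains('Address') + p"
).get_text(strip=True)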