Scraping multiple pages from a website (BeautifulSoup, Requests, Python 3)

Date: 2016-04-25 19:06:45

Tags: python-3.x request beautifulsoup web-crawler

I would like to know how to scrape multiple different pages from one website with Beautiful Soup / Requests without having to repeat my code over and over.

Here is my current code, which crawls the tourist attractions of certain cities:

import requests
from bs4 import BeautifulSoup

RegionIDArray = [187147,187323,186338]
dict = {187147: 'Paris', 187323: 'Berlin', 186338: 'London'}
already_printed = set()

for reg in RegionIDArray:
    for page in range(1,700,30):
        r = requests.get("https://www.tripadvisor.de/Attractions-c47-g" + str(reg) + "-oa" + str(page) + ".html")

        g_data = soup.find_all("div", {"class": "element_wrap"})

        for item in g_data:
            header = item.find_all("div", {"class": "property_title"})
            item = (header[0].text.strip())
            if item not in already_printed:
                already_printed.add(item)

                print("POI: " + str(item) + " | " + "Location: " + str(dict[reg]) + " | " + "Art: Museum ")

So far, everything works as expected. As a next step, I would also like to grab the most popular museums of these cities in addition to the tourist attractions.

To do that, I have to modify the request by changing the c parameter so that all the required museums are fetched:

r = requests.get("https://www.tripadvisor.de/Attractions-c" + str(museumIDArray) +"-g" + str(reg) + "-oa" + str(page) + ".html")

My code then looks like this:

import requests
from bs4 import BeautifulSoup

RegionIDArray = [187147,187323,186338]
museumIDArray = [47,49]
dict = {187147: 'Paris', 187323: 'Berlin', 186338: 'London'}
already_printed = set()

for reg in RegionIDArray:
    for page in range(1,700,30):
        r = requests.get("https://www.tripadvisor.de/Attractions-c" + str(museumIDArray) +"-g" + str(reg) + "-oa" + str(page) + ".html")
        soup = BeautifulSoup(r.content)

        g_data = soup.find_all("div", {"class": "element_wrap"})

        for item in g_data:
            header = item.find_all("div", {"class": "property_title"})
            item = (header[0].text.strip())
            if item not in already_printed:
                already_printed.add(item)

                print("POI: " + str(item) + " | " + "Location: " + str(dict[reg]) + " | " + "Art: Museum ")

This does not seem to be quite right. The output I get does not include all the museums and tourist attractions for some of the cities.

Can anyone help me with this? Any feedback is appreciated.

1 Answer:

Answer 0 (score: 1)

All the names are in anchor tags inside the divs with the property_title class.

for reg in RegionIDArray:
    for page in range(1,700,30):
        r = requests.get("https://www.tripadvisor.de/Attractions-c" + str(museumIDArray) +"-g" + str(reg) + "-oa" + str(page) + ".html")
        soup = BeautifulSoup(r.content)

        for item in (a.text for a in soup.select("div.property_title a")):
            if item not in already_printed:
                already_printed.add(item)
                print("POI: " + str(item) + " | " + "Location: " + str(dct[reg]) + " | " + "Art: Museum ")

It is also better to get the links to the following pages from the pagination div:

from bs4 import BeautifulSoup
import requests
from urllib.parse import urljoin


RegionIDArray = [187147,187323,186338]
museumIDArray = [47,49]
dct = {187147: 'Paris', 187323: 'Berlin', 186338: 'London'}
already_printed = set()

def get_names(soup):
    for item in (a.text for a in soup.select("div.property_title a")):
        if item not in already_printed:
            already_printed.add(item)
            print("POI: {} | Location: {} | Art: Museum ".format(item, dct[reg]))

base = "https://www.tripadvisor.de"
for reg in RegionIDArray:
    r = requests.get("https://www.tripadvisor.de/Attractions-c[47,49]-g{}-oa.html".format(reg))
    soup = BeautifulSoup(r.content)

    # get links to all next pages.
    all_pages = (urljoin(base, a["href"]) for a in soup.select("div.unified.pagination a.pageNum.taLnk")[1:])
    # use helper function to print the names.
    get_names(soup)

    # visit all remaining pages.
    for url in all_pages:
        soup = BeautifulSoup(requests.get(url).content)
        get_names(soup)
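If the bracketed c[47,49] segment does not actually return both categories in one request, a fallback (untested against the live site) is to request each category ID on its own, reusing the same pagination helper; the single-ID pattern, e.g. -c47-, is the one the question's first snippet already used successfully. The time.sleep delay is an extra precaution, not part of the original answer:

import time

for reg in RegionIDArray:
    for cat in museumIDArray:
        # one request per category ID, e.g. Attractions-c47-g187147-oa.html
        r = requests.get("{}/Attractions-c{}-g{}-oa.html".format(base, cat, reg))
        soup = BeautifulSoup(r.content, "html.parser")

        # collect links to the remaining result pages, then print names from each.
        all_pages = (urljoin(base, a["href"]) for a in soup.select("div.unified.pagination a.pageNum.taLnk")[1:])
        get_names(soup)
        for url in all_pages:
            soup = BeautifulSoup(requests.get(url).content, "html.parser")
            get_names(soup)

        time.sleep(1)  # small pause between categories to be polite to the server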