Scraping Google Destinations

Asked: 2018-10-27 15:10:36

Tags: python web-scraping

I am preparing to travel around the world and would like to know the world's most popular sights, so I am trying to scrape the top destinations for a given place. The end goal is a list of each country's top destinations and each destination's top attractions. Google recently added Google Destinations, which is a great feature.

For example, googling Cuba Destinations makes Google show a card with the destinations Havana, Varadero, Trinidad, and Santiago de Cuba.

Googling Havana Cuba Destinations then shows Old Havana, Malecon, Castillo de los Tres Reyes Magos del Morro, and El Capitolio.

Finally, I would like to turn this into a table that looks like:

Cuba, Havana, Old Havana.
Cuba, Havana, Malecon.
Cuba, Havana, Castillo de los Tres Reyes Magos del Morro.
Cuba, Havana, El Capitolio.
Cuba, Varadero, Hicacos Peninsula.

And so on.
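Once the scraping works, persisting rows in that shape is straightforward with the standard csv module. A minimal sketch (the rows below are illustrative placeholders, not scraped data):

```python
import csv

# Illustrative placeholder rows in the desired (country, city, attraction) shape.
rows = [
    ("Cuba", "Havana", "Old Havana"),
    ("Cuba", "Havana", "Malecon"),
    ("Cuba", "Varadero", "Hicacos Peninsula"),
]

with open("destinations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["country", "city", "attraction"])
    writer.writerows(rows)
```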

I have already tried the API call shown in Travel destinations API, but it does not give the right results and frequently returns OVER_QUERY_LIMIT.

The code below returns an error:

import requests
from bs4 import BeautifulSoup

URL = "https://www.google.nl/destination/compare?q=cuba+destinations&site=search&output=search&dest_mid=/m/0d04z6&sa=X&ved=0API_KEY"

r = requests.get(URL)

soup = BeautifulSoup(r.content, 'html5lib')
print(soup.prettify())

Any tips?

2 answers:

Answer 0 (score: 0)

Try this Google Places API URL. It returns the sights/attractions/points of interest for (for example) New York City. Use the city name together with the keyword point of interest.

https://maps.googleapis.com/maps/api/place/textsearch/json?query=new+york+city+point+of+interest&language=en&key=API_KEY

These API results are the same as the Google search results below. https://www.google.com/search?sclient=psy-ab&site=&source=hp&btnG=Search&q=New+York+point+of+interest

Two more tips for you:

  • You can use the Python client for Google Maps Services: https://github.com/googlemaps/google-maps-services-python
  • For the OVER_QUERY_LIMIT problem, make sure you have added a billing method to your Google Cloud project (a credit card or the free trial credit balance). Don't worry: Google gives you thousands of free queries per month.
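As a sketch of calling that Text Search endpoint with requests (the key is a placeholder you must replace, and the parsing assumes the documented JSON shape with a top-level results list):

```python
import requests

def extract_names(payload):
    # Pull the place names out of a Places Text Search JSON payload.
    return [result["name"] for result in payload.get("results", [])]

def places_text_search(query, key):
    # Call the Text Search endpoint and return the matching place names.
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/textsearch/json",
        params={"query": query, "language": "en", "key": key},
    )
    resp.raise_for_status()
    return extract_names(resp.json())

# places_text_search("new york city point of interest", key="YOUR_API_KEY")
```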

Answer 1 (score: 0)

You will need to use something like Selenium, because the page issues multiple XHRs and you cannot get the rendered page with requests alone. First install Selenium:

sudo pip3 install selenium

Then get a driver from https://sites.google.com/a/chromium.org/chromedriver/downloads (depending on your OS, you may need to specify the driver's location).

from bs4 import BeautifulSoup
from selenium import webdriver
import time

browser = webdriver.Chrome()
url = "https://www.google.nl/destination/compare?q=cuba+destinations&site=search&output=search&dest_mid=/m/0d04z6&sa=X&ved=0API_KEY"
browser.get(url)
time.sleep(2)
html_source = browser.page_source
browser.quit()

soup = BeautifulSoup(html_source, "lxml")
# Get the headings
hs = [tag.text for tag in soup.find_all('h2')]
# Get the text-containing divs
divs = [tag.text for tag in soup.find_all('div', {'class': False})]
# Delete surplus divs
del divs[:22]
del divs[-1:]

print(list(zip(hs, divs)))

Output:

[('Havana', "Cuban capital known for Old Havana's colonial architecture, live salsa music & nearby beaches."), ('Varadero', 'Major Cuban resort town on Hicacos Peninsula, with a 20km beach, a golf course & several parks.'), ('Trinidad', 'Cuban town known for Plaza Mayor, colonial architecture & plantations of Valle de los Ingenios.'), ('Santiago de Cuba', 'Cuban city known for Afro-Cuban festivals & music, plus Spanish colonial & revolutionary history.'), ('Viñales', 'Cuban town known for Viñales Valley, Casa de Caridad Botanical Gardens & nearby tobacco farms.'), ('Cienfuegos', 'Cuban coastal city, known for Tomás Terry Theater, Arco de Triunfo & Playa Rancho Luna resorts.'), ('Santa Clara', 'Cuban city home to the Che Guevara Mausoleum, Parque Vidal & ornate Teatro La Caridad.'), ('Cayo Coco', 'Cuban island known for its white-sand beaches & resorts, plus reef snorkeling & flamingos.'), ('Cayo Santa María', 'Cuban island known for Gaviotas Beach, Cayo Santa María Wildlife Refuge & Pueblo La Estrella.'), ('Cayo Largo del Sur', 'Cuban island, known for beaches like Playa Blanca & Playa Sirena, plus a sea turtle center & diving.'), ('Plaza de la Revolución', 'Che Guevara and monuments'), ('Camagüey', 'Ballet, churches, history, and beaches'), ('Holguín', 'Cuban city known for Parque Calixto García, the Hacha de Holguín axe head & Guardalavaca beaches.'), ('Cayo Guillermo', 'Cuban island with beaches like Playa del Medio & Playa Pilar, plus vast expanses of coral reef.'), ('Matanzas', 'Caves, theater, beaches, history, and rivers'), ('Baracoa', 'Beaches, rivers, and nature'), ('Centro Habana', '\xa0'), ('Playa Girón', 'Beaches, snorkeling, and museums'), ('Topes de Collantes', 'Scenic nature reserve park for hiking'), ('Guardalavaca', 'Cuban resort known for Esmeralda Beach, the Cayo Naranjo Aquarium & the Chorro de Maíta Museum.'), ('Bay of Pigs', 'Snorkeling, scuba diving, and beaches'), ('Isla de la Juventud', 'Scuba diving and beaches'), ('Zapata Swamp', 'Parks, 
crocodiles, birdwatching, and swamps'), ('Pinar del Río', 'History'), ('Remedios', 'Churches, beaches, and museums'), ('Bayamo', 'Wax museums, monuments, history, and music'), ('Sierra Maestra', 'Peaks with a storied political history'), ('Las Terrazas', 'Zip-lining, nature reserves, and hiking'), ('Sancti Spíritus', 'History and museums'), ('Playa Ancon', 'Beaches, snorkeling, and scuba diving'), ('Jibacoa', 'Beaches, snorkeling, and jellyfish'), ('Jardines de la Reina', 'Scuba diving, fly-fishing, and gardens'), ('Cayo Jutías', 'Beach and snorkeling'), ('Guamá, Cuba', 'Crocodiles, beaches, snorkeling, and lakes'), ('Morón', 'Crocodiles, lagoons, and beaches'), ('Las Tunas', 'Beaches, nightlife, and history'), ('Soroa', 'Waterfalls, gardens, nature, and ecotourism'), ('Guanabo', 'Beach'), ('María la Gorda', 'Scuba diving, beaches, and snorkeling'), ('Alejandro de Humboldt National Park', 'Park, protected area, and hiking'), ('Ciego de Ávila', 'Zoos and beaches'), ('Bacunayagua', '\xa0'), ('Guantánamo', 'Beaches, history, and nature'), ('Cárdenas', 'Beaches, museums, monuments, and history'), ('Canarreos Archipelago', 'Sailing and coral reefs'), ('Caibarién', 'Beaches'), ('El Nicho', 'Waterfalls, parks, and nature'), ('San Luis Valley', 'Cranes, national wildlife refuge, and elk')]
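To fold this back into the flat country/city table the question sketched, the zipped pairs just need the country prefixed. A minimal sketch, where `pairs` stands in for the `list(zip(hs, divs))` result above (descriptions abbreviated):

```python
# `pairs` stands in for list(zip(hs, divs)) from the scrape above.
pairs = [
    ("Havana", "Cuban capital known for Old Havana's colonial architecture..."),
    ("Varadero", "Major Cuban resort town on Hicacos Peninsula..."),
]

country = "Cuba"
# Keep the city, drop the description, prefix the country.
table = [(country, city) for city, _description in pairs]
print(table)  # [('Cuba', 'Havana'), ('Cuba', 'Varadero')]
```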

Updated in response to comments:

from bs4 import BeautifulSoup
from selenium import webdriver
import time

browser = webdriver.Chrome()
for place in ["Cuba", "Belgium", "France"]:
    url = "https://www.google.nl/destination/compare?site=destination&output=search"
    browser.get(url)  # you may not need to do this every time if you clear the search box
    time.sleep(2)
    element = browser.find_element_by_name('q')  # get the query box
    time.sleep(2)
    element.send_keys(place)  # populate the search box
    time.sleep(2)
    search_box = browser.find_element_by_class_name('sbsb_c')  # get the first element in the suggestion list
    search_box.click()  # click it
    time.sleep(2)
    destinations = browser.find_element_by_id('DESTINATIONS')  # get the destinations link
    destinations.click()  # click it
    time.sleep(2)
    html_source = browser.page_source
    soup = BeautifulSoup(html_source, "lxml")
    # Get the headings
    hs = [tag.text for tag in soup.find_all('h2')]
    # Get the text-containing divs
    divs = [tag.text for tag in soup.find_all('div', {'class': False})]
    # Delete surplus divs
    del divs[:22]
    del divs[-1:]
    print(list(zip(hs, divs)))

browser.quit()