Google Maps API does not retrieve all store results

Asked: 2014-11-17 15:00:31

Tags: python api google-maps

I wrote a quick script to fetch all stores (or attempt to, anyway) given a zip code, and it looks like this:

from googleplaces import GooglePlaces, types, lang

API_KEY = "MYKEY"

google_places = GooglePlaces(API_KEY)


query_result = google_places.nearby_search(location="94563", keyword="store", radius=50000)

if query_result.has_attributions:
    print query_result.html_attributions

for place in query_result.places:
    print place.name

These are the results I get:

Apple Store
Stonestown Galleria
Lawrence Hall of Science
Fentons Creamery
Nordstrom
The North Face
Amoeba Music
Safeway
Rockridge Market Hall
City Beer Store
Best Buy
City Lights Booksellers & Publishers
Macy's
Barnes & Noble
Rainbow Grocery
Target
Urban Outfitters
The UPS Store
AT&T
Marshalls

But if we go to maps.google.com and run the same query for stores, this is what we get:

[screenshot: stores sample]

We notice that this result set contains many stores that the API query did not return. I'm not sure what I'm doing wrong.

1 Answer:

Answer 0: (score: 2)

When a nearby search matches more than 20 results, the API also returns a next_page_token, which must be sent in a separate call to retrieve the remaining pages [Reference]. I don't know whether the googleplaces package you're using supports this, but it is most likely why you're getting exactly 20 results; the rest are there, you just need to call the API again to fetch them.
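The pagination logic can be sketched on its own, with the HTTP request abstracted behind a function so the loop can be exercised without network access (fetch_page and the fake pages dict below are illustrative stand-ins, not part of the Places API):

```python
# Minimal sketch of next_page_token pagination: keep fetching pages
# and accumulating results until a response arrives with no token.
def collect_all_results(fetch_page):
    """fetch_page(token) returns a parsed response dict; token is None
    for the first request. Against the real API you would also need a
    short sleep before reusing a fresh token."""
    results = []
    page = fetch_page(None)              # first request: no page token
    results.extend(page['results'])
    while 'next_page_token' in page:     # more pages remain
        page = fetch_page(page['next_page_token'])
        results.extend(page['results'])
    return results

# Fake three-page response set for demonstration
pages = {
    None: {'results': [1, 2], 'next_page_token': 'a'},
    'a':  {'results': [3, 4], 'next_page_token': 'b'},
    'b':  {'results': [5]},              # last page: no token
}
print(collect_all_results(pages.get))    # [1, 2, 3, 4, 5]
```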

My suggestion is to ditch the package and work with Google's API directly. Here is some helper code to get you started. You'll need to download and install geopy if you haven't already.

import json
import urllib
import time
from geopy.geocoders import Nominatim
geolocator = Nominatim()
l = geolocator.geocode('94563') #enter the zip code you're interested in to get lat/long coords


longitude = l.longitude
latitude = l.latitude

resultslist = []
url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json?location='+str(latitude)+','+str(longitude)+'&radius=50000&types=store&key=<put your key here>' #construct URL, make sure to add your key without the <>
count = 0
ps = json.loads(urllib.urlopen(url).read())
for i in ps['results']:
    #parse results here
    resultslist.append(i)
    count += 1
if 'next_page_token' in ps:  # membership check avoids a KeyError when there is no further page
    while True:
        time.sleep(2)
        npt = ps['next_page_token']

        url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json?location='+str(latitude)+','+str(longitude)+'&radius=50000&types=store&key=<yourkey>&pagetoken='+str(npt)

        ps = json.loads(urllib.urlopen(url).read())

        for i in ps['results']:
            resultslist.append(i)
            #parse results here
            count += 1

        if 'next_page_token' not in ps:
            break  # no further pages

print 'results returned:', count
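One small improvement to the code above: building the request URL with urlencode instead of string concatenation handles escaping of parameter values for you (the parameter values below are illustrative, not a real geocode result or key):

```python
# Build the Nearby Search URL from a dict of parameters.
# urlencode escapes each value, e.g. the comma in the coordinates.
try:
    from urllib import urlencode          # Python 2
except ImportError:
    from urllib.parse import urlencode    # Python 3

params = {
    'location': '37.88,-122.18',          # example lat,long
    'radius': 50000,
    'types': 'store',
    'key': 'YOUR_KEY',                    # placeholder
}
url = ('https://maps.googleapis.com/maps/api/place/nearbysearch/json?'
       + urlencode(params))
print(url)
```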