I have a SQL table that looks like this, but with several hundred thousand rows:
+------+---------------------+-----------+------------+--------------+----------+
| id | timestamp | lat | lon | country_code | city |
+------+---------------------+-----------+------------+--------------+----------+
| 2231 | 2013-09-22 14:58:32 | 28.179199 | 113.113602 | CN | Changsha |
| 2232 | 2013-09-22 14:58:32 | 28.179199 | 113.113602 | CN | Changsha |
| 2233 | 2013-09-22 14:58:32 | 41.792198 | 123.432800 | CN | Shenyang |
| 2234 | 2013-09-22 14:58:32 | 31.045601 | 121.399696 | CN | Shanghai |
| 2235 | 2013-09-22 14:58:32 | 45.750000 | 126.650002 | CN | Harbin |
| 2236 | 2013-09-22 14:58:32 | 39.928902 | 116.388298 | CN | Beijing |
| 2237 | 2013-09-22 14:58:32 | 26.061399 | 119.306099 | CN | Fuzhou |
| 2238 | 2013-09-22 14:58:32 | 26.583300 | 106.716698 | CN | Guiyang |
| 2239 | 2013-09-22 14:58:32 | 39.928902 | 116.388298 | CN | Beijing |
| 2240 | 2013-09-22 14:58:32 | 31.045601 | 121.399696 | CN | Shanghai |
+------+---------------------+-----------+------------+--------------+----------+
I need to query by a timestamp range, fetch all records that fall inside it, and count the rows that share the same city (also including the lat/lon of any one item in the group, since they are identical across the whole group). At the moment I just run a plain SELECT and do the grouping in my application code (shown below), but that is slow because a few hundred KB have to be shipped to the application.

The Python code doing the aggregation:
from itertools import groupby
from operator import itemgetter

# result is the list of row dicts returned by the plain SELECT
sorted_events = sorted(result, key=itemgetter('city'))
aggregated = []
for k, g in groupby(sorted_events, key=itemgetter('city')):
    group = list(g)
    first_item = group[0]
    aggregated.append({
        "city": first_item['city'],
        "country_code": first_item['cc'],  # 'cc' is the SELECT's alias for country_code
        "lon": first_item['lon'],
        "lat": first_item['lat'],
        "number_of_items": len(group)
    })
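For context, here is a self-contained sketch of that groupby aggregation run over a few literal sample rows; the dict keys (`city`, `cc`, `lon`, `lat`) are assumed to match the column aliases of the SELECT, and the sample values are taken from the table above:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical sample rows, shaped like the SELECT result described above.
result = [
    {"city": "Beijing", "cc": "CN", "lon": 116.388298, "lat": 39.928902},
    {"city": "Shanghai", "cc": "CN", "lon": 121.399696, "lat": 31.045601},
    {"city": "Beijing", "cc": "CN", "lon": 116.388298, "lat": 39.928902},
]

# groupby only merges adjacent items, so the rows must be sorted first.
sorted_events = sorted(result, key=itemgetter("city"))

aggregated = []
for k, g in groupby(sorted_events, key=itemgetter("city")):
    group = list(g)
    first_item = group[0]  # lat/lon are identical within a group
    aggregated.append({
        "city": first_item["city"],
        "country_code": first_item["cc"],
        "lon": first_item["lon"],
        "lat": first_item["lat"],
        "number_of_items": len(group),
    })

print(aggregated)
```

Note that the sort is required: `itertools.groupby` starts a new group every time the key changes, so unsorted input would produce multiple groups per city.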
It works the way I want, but it is slow. Is there a way to do this directly with a SQL query? I currently produce the JSON output below and would like something similar:
[
    {
        "city": "Baotou",
        "lon": 109.822197,
        "country_code": "CN",
        "lat": 40.652199,
        "number_of_items": 288
    },
    {
        "city": "Beijing",
        "lon": 116.388298,
        "country_code": "CN",
        "lat": 39.928902,
        "number_of_items": 47
    }
]
Answer 0 (score: 2)
Is this what you are looking for?
select city, lon, country_code, lat, count(*) as number_of_items
from your_table t
where t.timestamp between STARTTIMESTAMP and ENDTIMESTAMP
group by city, lon, country_code, lat;
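A minimal sketch of that query running end-to-end against an in-memory SQLite database; the table name `events` and the timestamp bounds are placeholders chosen for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        id INTEGER, timestamp TEXT, lat REAL, lon REAL,
        country_code TEXT, city TEXT
    )
""")
rows = [
    (2236, "2013-09-22 14:58:32", 39.928902, 116.388298, "CN", "Beijing"),
    (2239, "2013-09-22 14:58:32", 39.928902, 116.388298, "CN", "Beijing"),
    (2234, "2013-09-22 14:58:32", 31.045601, 121.399696, "CN", "Shanghai"),
]
conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?, ?, ?)", rows)

# The aggregation now happens inside the database, so only one
# row per city crosses the wire instead of every raw event.
cur = conn.execute("""
    SELECT city, lon, country_code, lat, COUNT(*) AS number_of_items
    FROM events
    WHERE timestamp BETWEEN ? AND ?
    GROUP BY city, lon, country_code, lat
    ORDER BY city
""", ("2013-09-22 00:00:00", "2013-09-23 00:00:00"))
results = cur.fetchall()
print(results)
```

Grouping by `city, lon, country_code, lat` together is safe here because, as the question states, lat/lon are identical for every row of a given city, so the extra columns do not split the groups.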