Optimizing an algorithm that computes over a large dataset

Date: 2015-01-10 23:48:40

Tags: python csv optimization pandas gis

I once again find myself stumped by pandas and how best to perform a 'vector operation'. My code works, but it takes a very long time to iterate through everything.

What the code attempts to do is loop through shapes.csv and determine which shape_pt_sequence is a stop_id, then assign the stop_lat and stop_lon to shape_pt_lat and shape_pt_lon, while also marking that shape_pt_sequence as is_stop.

GIST of the data:

stop_times.csv LINK

trips.csv LINK

shapes.csv LINK

Here is my code:

import pandas as pd
from haversine import *

'''
iterate through shapes and match stops along a shape_pt_sequence within
x amount of distance. for shape_pt_sequence that is closest, replace the stop
lat/lon to the shape_pt_lat/shape_pt_lon, and mark is_stop column with 1.
'''

# readability assignments for shapes.csv
shapes = pd.read_csv('csv/shapes.csv')
shapes_index = list(set(shapes['shape_id']))
shapes_index.sort(key=int)
shapes.set_index(['shape_id', 'shape_pt_sequence'], inplace=True)

# readability assignments for trips.csv
trips = pd.read_csv('csv/trips.csv')
trips_index = list(set(trips['trip_id']))
trips.set_index(['trip_id'], inplace=True)

# readability assignments for stop_times.csv
stop_times = pd.read_csv('csv/stop_times.csv')
stop_times.set_index(['trip_id','stop_sequence'], inplace=True)
print(len(stop_times.loc[1423492]))

# readability assignments for stops.csv
stops = pd.read_csv('csv/stops.csv')
stops.set_index(['stop_id'], inplace=True)

# for each trip_id
for i in trips_index:
    print('******NEW TRIP_ID******')
    print(i)
    i = int(i)

    # for each stop_sequence in stop_times
    for x in range(len(stop_times.loc[i])):
        stop_lat = stop_times.loc[i]['stop_lat'].iloc[x]
        stop_lon = stop_times.loc[i]['stop_lon'].iloc[x]
        stop_coordinate = (stop_lat, stop_lon)
        print(stop_coordinate)

        # shape_id that matches trip_id
        print('**SHAPE_ID**')
        trips_shape_id = trips.loc[i,['shape_id']].iloc[0]
        trips_shape_id = int(trips_shape_id)
        print(trips_shape_id)

        smallest = 0

        for y in range(len(shapes.loc[trips_shape_id])):
            shape_lat = shapes.loc[trips_shape_id]['shape_pt_lat'].iloc[y]
            shape_lon = shapes.loc[trips_shape_id]['shape_pt_lon'].iloc[y]

            shape_coordinate = (shape_lat, shape_lon)

            haversined = haversine_mi(stop_coordinate, shape_coordinate)

            if smallest == 0 or haversined < smallest:
                smallest = haversined
                smallest_shape_pt_indexer = y

            print(haversined)
            print('{0:.20f}'.format(smallest))

        print('{0:.20f}'.format(smallest))
        print(smallest_shape_pt_indexer)

        # mark is_stop as 1 and replace the coordinate values; address the row
        # by its full index label so the assignment hits the original frame
        # (chained .loc[...].iloc[...] assignment only modifies a copy)
        seq = shapes.loc[trips_shape_id].index[smallest_shape_pt_indexer]
        shapes.loc[(trips_shape_id, seq), 'is_stop'] = 1
        shapes.loc[(trips_shape_id, seq), ['shape_pt_lat', 'shape_pt_lon']] = [stop_lat, stop_lon]

# restore shape_id/shape_pt_sequence as columns before writing out
shapes.reset_index().to_csv('csv/shapes.csv', index=False)
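For reference, the inner distance loop above can be replaced by a single vectorized haversine over every shape point at once; a minimal sketch with NumPy (the function name is illustrative, and the formula assumes distances in miles as in the code above):

```python
import numpy as np

def nearest_shape_pt(stop_lat, stop_lon, shape_lats, shape_lons):
    """Return (index, distance_mi) of the shape point closest to a stop,
    computing the haversine distance to every shape point at once."""
    r = 3959.0  # mean Earth radius in miles
    lat1, lon1 = np.radians(stop_lat), np.radians(stop_lon)
    lat2 = np.radians(np.asarray(shape_lats, dtype=float))
    lon2 = np.radians(np.asarray(shape_lons, dtype=float))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    dist = 2 * r * np.arcsin(np.sqrt(a))  # distances to all shape points
    i = int(np.argmin(dist))
    return i, float(dist[i])
```

Called once per stop with the whole shape's latitude/longitude columns (e.g. `shapes.loc[trips_shape_id]['shape_pt_lat'].values`), this removes the Python-level `for y in range(...)` loop entirely.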

2 Answers:

Answer 0 (score: 0)

You could run that code across several workers instead of serially.

I suggest using a pool of workers, since it is very simple to use.

Instead of:

for i in trips_index:

you could use something like:

from multiprocessing import Pool

pool = Pool(processes=4)
results = pool.map(func, trips_index)

where the func method is something like:

def func(i):
   #code here

You can simply put the whole body of the for loop inside that method. In this example it will spread the work across 4 subprocesses, which should be a nice improvement.
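A complete, runnable sketch of that pattern (the doubling body and the trip IDs are stand-ins for the real per-trip work):

```python
from multiprocessing import Pool

def func(trip_id):
    # stand-in for the per-trip matching work in the question's outer loop
    return trip_id * 2

if __name__ == '__main__':
    trips_index = [101, 102, 103, 104]  # illustrative trip IDs
    with Pool(processes=4) as pool:
        # map() calls func once per trip_id and collects the results in order
        results = pool.map(func, trips_index)
    print(results)
```

Note that with `pandas`, each worker needs its own copy of the data (or must read the CSVs itself), and results must be merged back into `shapes` in the parent process, since subprocesses cannot mutate the parent's DataFrame.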

Answer 1 (score: 0)

One thing to consider is that a group of trips will often have the same stop sequence and the same shape data (the only difference between the trips being the times). So it may make sense to cache the find-nearest-point-on-shape operation keyed on (stop_id, shape_id). I bet that would cut your runtime by an order of magnitude.
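A sketch of that caching idea, keying the result on (stop_id, shape_id) with a plain dict; `find_nearest` here is a placeholder for whatever nearest-point search the question's code performs:

```python
# cache keyed on (stop_id, shape_id): the nearest shape point for a given
# stop on a given shape never changes, so the expensive search runs at
# most once per pair, no matter how many trips share that stop and shape
_nearest_cache = {}

def cached_nearest(stop_id, shape_id, find_nearest):
    key = (stop_id, shape_id)
    if key not in _nearest_cache:
        _nearest_cache[key] = find_nearest(stop_id, shape_id)
    return _nearest_cache[key]
```

With many trips sharing the same shape and stops, most lookups become dictionary hits instead of full scans over the shape's points.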