Writing a complex function inside map/reduce in PySpark

Asked: 2018-12-18 20:49:42

Tags: apache-spark dictionary pyspark bigdata reduce

I have a CSV file (a large file, > 20 GB) with data that looks like this:

ObjectID,Lon,Lat,Speed,GPSTime
182163,116.367520,40.024680,29.00,2016-07-04 09:01:09.000
116416,116.694693,39.785382,0.00,2016-07-04 09:01:17.000

I want to process this geographic data with PySpark (RDD, map, and reduce): check each row to see whether its latitude/longitude falls inside a polygon, and if it does, write that row to an output file.

Here is the original code, without Spark:

from shapely.geometry import Point, Polygon
import pandas as pd
import csv

# data, cityIdx, inFileName, outFileName and chunksize are defined earlier.
polygon = Polygon(data['features'][cityIdx]['geometry']['coordinates'][0])

with open(outFileName, 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    # Read the 20 GB file in chunks so it never has to fit in memory.
    for chunk in pd.read_csv(inFileName, chunksize=chunksize, sep=','):
        # The real file also has Direct and Mileage columns,
        # which the sample above omits.
        rows = zip(chunk['ObjectID'], chunk['Lon'], chunk['Lat'],
                   chunk['Speed'], chunk['Direct'], chunk['Mileage'],
                   chunk['GPSTime'])
        for ObjectID, Lon, Lat, Speed, Direct, Mileage, GPSTime in rows:
            point = Point(Lon, Lat)
            # Keep only rows whose coordinates fall inside the polygon.
            if polygon.contains(point):
                writer.writerow([ObjectID, Lon, Lat, Speed,
                                 Direct, Mileage, GPSTime])

How can I do this with .rdd.map(fun).reduce(fun)? I thought of using lambda expressions, but I could not put together code that actually runs on Spark.
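For what it's worth, since the job is a row-wise filter rather than an aggregation, filter is a more natural primitive here than reduce. Below is a minimal sketch of the idea; the polygon coordinates are placeholders for the real GeoJSON ring, the paths are hypothetical, and Shapely is assumed to be installed on every executor:

from pyspark import SparkContext
from shapely.geometry import Point, Polygon

sc = SparkContext(appName='polygon-filter')

# Placeholder ring; in practice build it from the GeoJSON exactly as above:
# Polygon(data['features'][cityIdx]['geometry']['coordinates'][0])
polygon = Polygon([(116.0, 39.5), (117.0, 39.5), (117.0, 40.5), (116.0, 40.5)])
poly_bc = sc.broadcast(polygon)  # ship one copy per executor, not per task

def inside(line):
    fields = line.split(',')
    try:
        lon, lat = float(fields[1]), float(fields[2])
    except (IndexError, ValueError):
        return False  # drops the header row and malformed lines
    return poly_bc.value.contains(Point(lon, lat))

# Hypothetical paths; each matching line is written back out unchanged.
sc.textFile('hdfs:///path/to/input.csv') \
  .filter(inside) \
  .saveAsTextFile('hdfs:///path/to/output')

Broadcasting the polygon serializes it once per executor instead of once per task closure, and the try/except doubles as a header filter, since float('Lon') raises ValueError.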

0 answers:

No answers yet.