Optimizing big-data processing in PySpark

Asked: 2016-10-17 15:05:04

Tags: python python-2.7 apache-spark pyspark pyspark-sql

Not a problem as such -> looking for advice

I am working with 20 GB + 6 GB = 26 GB of CSV files on a 1 + 3 cluster (1 master, 3 workers, each with 16 GB RAM).

This is the operation I am doing:

df = spark.read.csv()   #20gb
df1 = spark.read.csv()  #6gb
df_merged = df.join(df1, 'name', 'left')  ###merging
df_merged.persist(StorageLevel.MEMORY_AND_DISK)  ##if I use MEMORY_ONLY will I gain more performance?????
print('No. of records found: ', df_merged.count())  ##just ensure the persist by calling an action
df_merged.registerTempTable('table_satya')
query_list = [query1, query2, query3]  ###sql query strings to be fired
city_list = [city1, city2, city3...total 8 cities]
file_index = 0  ###will create files with an increasing index
for query_str in query_list:
    result = spark.sql(query_str)  #ex: select * from table_satya where date >= '2016-01-01'
    #result.persist()  ###will it increase performance?
    for city in city_list:
        df_city = result.where(result.city_name == city)
        #store as a csv file (pandas-style single file)
        df_city.toPandas().to_csv('file_' + str(file_index) + '.csv', index=False)  #no collect() needed: toPandas() already pulls the rows to the driver
        file_index += 1

df_merged.unpersist()  ###do I even need to do this, or can Spark handle it internally?

At present this takes a lot of time:

#persist (on count()) - 34 min
#each result (on firing each sql query) - around 2*8 = 16 min of toPandas() ops
#   (each toPandas().to_csv() takes around 2 min)
#for 3 queries: 16*3 = 48 min
#total: 34 + 48 = 82 min  ###seriously need optimization

So can someone suggest how to optimize the above process for better performance (in both time and memory)?

Why this worries me: I did the same operations on a Python/Pandas setup (a single 64 GB machine with serialized pickle data) and was able to finish in 8-12 minutes. Since my data volume seems to keep growing, I need to adopt a technology like Spark.

Thanks in advance. :)

1 answer:

Answer 0 (score: 1)

I think your best bet is to cut the source data down to size. You mention that your source data has 90 cities, but you are only interested in 8 of them. Filter out the cities you don't want and save the ones you do want into separate CSV files:

import itertools
import csv

city_list = [city1, city2,city3...total 8 cities]

with open('f1.csv', 'rb') as f1, open('f2.csv', 'rb') as f2:
    r1, r2 = csv.reader(f1), csv.reader(f2)
    header = next(r1)
    next(r2) # discard headers in second file
    city_col = header.index('city_name')
    city_files = []
    city_writers = {}
    try:
        for city in city_list:
            f = open(city+'.csv', 'wb')
            city_files.append(f)
            writer = csv.writer(f)
            writer.writerow(header)
            city_writers[city] = writer
        for row in itertools.chain(r1, r2):
            city_name = row[city_col]
            if city_name in city_writers:
                city_writers[city_name].writerow(row)
    finally:
        for f in city_files:
            f.close()

After that, iterate over the cities, create a DataFrame for each city, and then run your three queries in a nested loop, as sketched below. Each DataFrame should fit in memory without any problem, and the queries should run quickly because they operate on a much smaller data set.
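A minimal sketch of that nested loop, assuming the per-city CSV files written above and reusing the existing spark session plus the city_list / query_list placeholders from the question (file names and read options are illustrative):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # or reuse the session from the question
# city_list and query_list are the same placeholders defined in the question;
# the queries are still written against 'table_satya'

file_index = 0
for city in city_list:
    # each per-city file is a small fraction of the original 26 GB, so it fits in memory
    df_city = spark.read.csv(city + '.csv', header=True, inferSchema=True)
    df_city.createOrReplaceTempView('table_satya')  # Spark 2.x replacement for registerTempTable
    for query_str in query_list:
        result = spark.sql(query_str)
        result.toPandas().to_csv('file_' + str(file_index) + '.csv', index=False)
        file_index += 1

Note that the output files come out grouped per city rather than per query here; swap the loop nesting if you need the original file ordering.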
