pyspark - grouping and calculating data

Time: 2018-03-05 12:43:53

Tags: python apache-spark pyspark rdd

I have the following csv file.

Index,Arrival_Time,Creation_Time,x,y,z,User,Model,Device,gt
0,1424696633908,1424696631913248572,-5.958191,0.6880646,8.135345,a,nexus4,nexus4_1,stand
1,1424696633909,1424696631918283972,-5.95224,0.6702118,8.136536,a,nexus4,nexus4_1,stand
2,1424696633918,1424696631923288855,-5.9950867,0.6535491999999999,8.204376,a,nexus4,nexus4_1,stand
3,1424696633919,1424696631928385290,-5.9427185,0.6761626999999999,8.128204,a,nexus4,nexus4_1,stand

I have to create an RDD where User, Model and gt form the primary key; I don't know whether I have to use them as a tuple.

Then, once I have the primary key fields, I have to calculate the AVG, MAX and MIN of 'x', 'y' and 'z'.

This is an example of the output:

User,Model,gt,media(x,y,z),desviacion(x,y,z),max(x,y,z),min(x,y,z)
a, nexus4,stand,-3.0,0.7,8.2,2.8,0.14,0.0,-1.0,0.8,8.2,-5.0,0.6,8.2

Any ideas on how to group them and, for example, get the mean (media) value of 'x'?

With my current code, I get the following.

# Data loading

    lectura = sc.textFile("Phones_accelerometer.csv")

    datos = lectura.map(lambda x: ((x.split(",")[6], x.split(",")[7], x.split(",")[9]),(x.split(",")[3], x.split(",")[4], x.split(",")[5])))

    sumCount = datos.combineByKey(lambda value: (value, 1), lambda x, value: (x[0] + value, x[1] + 1), lambda x, y: (x[0] + y[0], x[1] + y[1]))

An example of my tuples:

   [(('a', 'nexus4', 'stand'), ('-5.958191', '0.6880646', '8.135345'))]
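
(A minimal sketch of how this combineByKey approach could be continued, assuming the header row is dropped and x, y and z are cast to float first, since summing the raw strings would only concatenate them; the names below are illustrative only.)

    # Sketch: drop the header row, split each line once, and cast x, y, z to float
    filas = lectura.filter(lambda line: not line.startswith("Index")) \
                   .map(lambda line: line.split(","))

    datos = filas.map(lambda f: ((f[6], f[7], f[9]),
                                 (float(f[3]), float(f[4]), float(f[5]))))

    # Per-key count and per-coordinate sums, from which the averages follow
    sumCount = datos.combineByKey(
        lambda v: (1, v[0], v[1], v[2]),
        lambda acc, v: (acc[0] + 1, acc[1] + v[0], acc[2] + v[1], acc[3] + v[2]),
        lambda a, b: (a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]))

    medias = sumCount.mapValues(lambda c: (c[1] / c[0], c[2] / c[0], c[3] / c[0]))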

2 answers:

Answer 0 (score: 1)

If you have the csv data in a file as given in the question, you can use sqlContext to read it as a dataframe and cast the columns to the appropriate types, as

df = sqlContext.read.format("com.databricks.spark.csv").option("header", True).load("path to csv file")
import pyspark.sql.functions as F
import pyspark.sql.types as T
df = df.select(F.col('User'), F.col('Model'), F.col('gt'), F.col('x').cast('float'), F.col('y').cast('float'), F.col('z').cast('float'))

I have only selected the primary keys and the necessary columns, which should give you

+----+------+-----+----------+---------+--------+
|User|Model |gt   |x         |y        |z       |
+----+------+-----+----------+---------+--------+
|a   |nexus4|stand|-5.958191 |0.6880646|8.135345|
|a   |nexus4|stand|-5.95224  |0.6702118|8.136536|
|a   |nexus4|stand|-5.9950867|0.6535492|8.204376|
|a   |nexus4|stand|-5.9427185|0.6761627|8.128204|
+----+------+-----+----------+---------+--------+
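
Side note (an assumption, not part of the original answer): on Spark 2.x and later the external spark-csv package is not needed, and a rough equivalent load, assuming a SparkSession named spark, would be

df = spark.read.csv("path to csv file", header=True, inferSchema=True)

With inferSchema=True the numeric columns already come in as doubles, so the explicit casts become optional.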

All of your requirements (the mean, the standard deviation, the max and the min) depend on the full list of x, y and z values for each primary key: User, Model and gt.

So you need groupBy, the collect_list built-in function and a udf function to calculate all the requirements. The final step is to separate them into different columns, as shown below.

from math import sqrt
def calculation(array):
    # mean, sample standard deviation, max and min of the collected values
    num_items = len(array)
    mean = sum(array) / num_items
    differences = [x - mean for x in array]
    sq_differences = [d ** 2 for d in differences]
    ssd = sum(sq_differences)
    variance = ssd / (num_items - 1)
    sd = sqrt(variance)
    return [mean, sd, max(array), min(array)]

calcUdf = F.udf(calculation, T.ArrayType(T.FloatType()))

df.groupBy('User', 'Model', 'gt')\
    .agg(calcUdf(F.collect_list(F.col('x'))).alias('x'),
         calcUdf(F.collect_list(F.col('y'))).alias('y'),
         calcUdf(F.collect_list(F.col('z'))).alias('z'))\
    .select(F.col('User'), F.col('Model'), F.col('gt'),
            F.col('x')[0].alias('mean_x'), F.col('y')[0].alias('mean_y'), F.col('z')[0].alias('mean_z'),
            F.col('x')[1].alias('deviation_x'), F.col('y')[1].alias('deviation_y'), F.col('z')[1].alias('deviation_z'),
            F.col('x')[2].alias('max_x'), F.col('y')[2].alias('max_y'), F.col('z')[2].alias('max_z'),
            F.col('x')[3].alias('min_x'), F.col('y')[3].alias('min_y'), F.col('z')[3].alias('min_z'))\
    .show(truncate=False)

So finally you should have

+----+------+-----+---------+---------+--------+-----------+-----------+-----------+----------+---------+--------+----------+---------+--------+
|User|Model |gt   |mean_x   |mean_y   |mean_z  |deviation_x|deviation_y|deviation_z|max_x     |max_y    |max_z   |min_x     |min_y    |min_z   |
+----+------+-----+---------+---------+--------+-----------+-----------+-----------+----------+---------+--------+----------+---------+--------+
|a   |nexus4|stand|-5.962059|0.6719971|8.151115|0.022922019|0.01436464 |0.0356973  |-5.9427185|0.6880646|8.204376|-5.9950867|0.6535492|8.128204|
+----+------+-----+---------+---------+--------+-----------+-----------+-----------+----------+---------+--------+----------+---------+--------+
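
Side note (not part of the original approach): because the mean, the sample standard deviation, the max and the min all have built-in aggregate functions, the same summary can also be computed without collect_list and a udf, for example

df.groupBy('User', 'Model', 'gt')\
    .agg(F.mean('x').alias('mean_x'), F.stddev('x').alias('deviation_x'),
         F.max('x').alias('max_x'), F.min('x').alias('min_x'),
         F.mean('y').alias('mean_y'), F.stddev('y').alias('deviation_y'),
         F.max('y').alias('max_y'), F.min('y').alias('min_y'),
         F.mean('z').alias('mean_z'), F.stddev('z').alias('deviation_z'),
         F.max('z').alias('max_z'), F.min('z').alias('min_z'))\
    .show(truncate=False)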

I hope the answer is helpful.

Answer 1 (score: 0)

You will have to use groupByKey to get the median. Although groupByKey is generally not preferred, finding the median of a list of numbers is not easily parallelizable: the logic for computing a median needs the whole list of numbers for a key at once. groupByKey is the aggregation method to use when you need to process all of a key's values at the same time.
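
A minimal sketch of that idea, assuming a pair RDD like datos from the question with the x, y and z values already cast to float; the median helper and the medianas name below are illustrative only:

def median(values):
    # median of a plain Python list: sort and take the middle element,
    # averaging the two middle elements when the count is even
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2.0

# groupByKey gathers every (x, y, z) tuple of a key in one place,
# so the full list is available for the median of each coordinate
medianas = datos.groupByKey().mapValues(
    lambda vals: tuple(median([v[i] for v in vals]) for i in range(3)))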

Also, as mentioned in the comments, this task would be easier with Spark DataFrames.
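
For example, with the dataframe from the other answer (and pyspark.sql.functions imported as F), an approximate per-group median can be obtained from the built-in percentile_approx SQL function; a sketch:

df.groupBy('User', 'Model', 'gt')\
    .agg(F.expr('percentile_approx(x, 0.5)').alias('median_x'),
         F.expr('percentile_approx(y, 0.5)').alias('median_y'),
         F.expr('percentile_approx(z, 0.5)').alias('median_z'))\
    .show(truncate=False)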