Spark statistics functions in Python

Date: 2015-03-02 18:54:08

Tags: python hadoop apache-spark

I asked a question about statistics functions and got an answer, but I'm looking for another way of doing it.

What I find strange is the following. This works:

# take the second field of each record, parse the value as a float, and group everything under one key
myData = dataSplit.map(lambda arr: (arr[1]))
myData2 = myData.map(lambda line: line.split(',')).map(lambda fields: ("Column", float(fields[0]))).groupByKey()
# min() here is the Python builtin applied to each group's iterable, so this works
stats[1] = myData2.map(lambda (Column, values): (min(values))).collect()

But when I add this line:

stats[4] = myData2.map(lambda (Column, values): (values)).variance()

it fails.
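The difference is that min() in the working snippet is the plain Python builtin, applied to each group's iterable inside the map, whereas variance() is an RDD method that expects an RDD of numbers. After groupByKey(), myData2 holds ("Column", ResultIterable) pairs, so the mapped RDD still contains iterables rather than floats. A minimal sketch of one workaround, assuming the same myData2 pipeline as above, is to flatten the groups back into a numeric RDD first:

# flatMap unrolls each group's ResultIterable into individual floats,
# producing an RDD of numbers that the RDD statistics methods accept
flattened = myData2.flatMap(lambda kv: kv[1])
print(flattened.variance())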

So I put in some print statements:

myData = dataSplit.map(lambda arr: (arr[1]))
print myData.collect()
myData2 = myData.map(lambda line: line.split(',')).map(lambda fields: ("Column", float(fields[0]))).groupByKey()
print myData2.map(lambda (Column, values): (values)).collect()

Printing myData:

[u'18964', u'18951', u'18950', u'18949', u'18960', u'18958', u'18956', u'19056', u'18948', u'18969', u'18961', u'18959', u'18957', u'18968', u'18966', u'18967', u'18971', u'18972', u'18353', u'18114', u'18349', u'18348', u'18347', u'18346', u'19053', u'19052', u'18305', u'18306', u'18318', u'18317']

Printing myData2:

[<pyspark.resultiterable.ResultIterable object at 0x7f3f7d3e0710>]
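This print shows the problem: groupByKey() stores each group's values as a pyspark.resultiterable.ResultIterable, so collecting the mapped RDD yields an opaque object rather than the numbers themselves. A small inspection sketch, assuming the same myData2, that materializes the group for viewing:

# list() forces the lazy ResultIterable so the grouped floats become visible
print(myData2.map(lambda kv: (kv[0], list(kv[1]))).collect())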

1 Answer:

Answer 0 (score: -1)

Solved it:

# map straight to the numeric values (no groupByKey), so stdev() runs on an RDD of floats
print myData.map(lambda line: line.split(',')).map(lambda fields: ("Column", float(fields[0]))).map(lambda (column, value): (value)).stdev()
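Since the fix maps straight to the numeric values and skips groupByKey() entirely, it is worth noting that PySpark can also produce all of these statistics in a single pass. A sketch under the same assumption (the myData pipeline from the question):

# stats() makes one pass over the RDD and returns a StatCounter
# whose repr shows count, mean, stdev, max and min together
values = myData.map(lambda line: line.split(',')).map(lambda fields: float(fields[0]))
print(values.stats())
# individual figures are available too, e.g. values.stats().variance()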