I am using IPython on Spark, and I have an RDD that, when printed, contains data in this format:

print rdd1.collect()
[u'2010-12-08 00:00:00', u'2010-12-18 01:20:00', u'2012-05-13 00:00:00',....]

Each entry is a datetime stamp, and I want to find the minimum and the maximum value in this RDD. How can I do that?
Answer 0 (score: 5)
You can use the aggregate function (for an explanation of how it works, see: What is the equivalent implementation of RDD.groupByKey() using RDD.aggregateByKey()?):
from datetime import datetime

rdd = sc.parallelize([
    u'2010-12-08 00:00:00', u'2010-12-18 01:20:00', u'2012-05-13 00:00:00'])

def seq_op(acc, x):
    """ Given a tuple (min-so-far, max-so-far) and a date string
    return a tuple (min-including-current, max-including-current)
    """
    d = datetime.strptime(x, '%Y-%m-%d %H:%M:%S')
    return (min(d, acc[0]), max(d, acc[1]))

def comb_op(acc1, acc2):
    """ Given a pair of tuples (min-so-far, max-so-far)
    return a tuple (min-of-mins, max-of-maxs)
    """
    return (min(acc1[0], acc2[0]), max(acc1[1], acc2[1]))

# (initial-min <- max-date, initial-max <- min-date)
rdd.aggregate((datetime.max, datetime.min), seq_op, comb_op)
## (datetime.datetime(2010, 12, 8, 0, 0), datetime.datetime(2012, 5, 13, 0, 0))
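The same single-pass idea can also be written with map and reduce, which avoids having to supply the (datetime.max, datetime.min) zero value (a small sketch reusing the rdd above):

parsed = rdd.map(lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S'))
# Pair each datetime with itself, then merge pairs by keeping the
# smaller first element and the larger second element.
parsed.map(lambda d: (d, d)).reduce(
    lambda a, b: (min(a[0], b[0]), max(a[1], b[1])))
## (datetime.datetime(2010, 12, 8, 0, 0), datetime.datetime(2012, 5, 13, 0, 0))

Either way the data is scanned once, whereas calling min() and max() separately would scan the RDD twice.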
Or with DataFrames:
from pyspark.sql import Row
from pyspark.sql.functions import from_unixtime, unix_timestamp, min, max

row = Row("ts")
df = rdd.map(row).toDF()
df.withColumn("ts", unix_timestamp("ts")).agg(
    from_unixtime(min("ts")).alias("min_ts"),
    from_unixtime(max("ts")).alias("max_ts")
).show()

## +-------------------+-------------------+
## |             min_ts|             max_ts|
## +-------------------+-------------------+
## |2010-12-08 00:00:00|2012-05-13 00:00:00|
## +-------------------+-------------------+
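As a variation (a sketch only; it assumes your Spark version's string-to-timestamp cast understands the yyyy-MM-dd HH:mm:ss format, which is the default), you can skip the unix_timestamp round trip and aggregate a real timestamp column:

df.select(df.ts.cast("timestamp").alias("ts")).agg(
    min("ts").alias("min_ts"),
    max("ts").alias("max_ts")
).show()

Here min and max are the pyspark.sql.functions versions imported above, not the Python builtins.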
Answer 1 (score: 3)
If your RDD consists of datetime objects, what is wrong with simply using:

rdd1.min()
rdd1.max()

This example works for me:
from datetime import datetime

rdd = sc.parallelize([u'2010-12-08 00:00:00', u'2010-12-18 01:20:00', u'2012-05-13 00:00:00'])

# Parse each string into a datetime; cache so min() and max()
# don't each re-parse the data.
rddT = rdd.map(lambda x: datetime.strptime(x, "%Y-%m-%d %H:%M:%S")).cache()
print rddT.min()
print rddT.max()
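In fact, for this particular format the parsing step is optional: zero-padded 'YYYY-MM-DD HH:MM:SS' strings sort lexicographically in chronological order, so min() and max() on the raw strings already give the right answer:

print rdd.min()   # u'2010-12-08 00:00:00'
print rdd.max()   # u'2012-05-13 00:00:00'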
Answer 2 (score: 0)
If you are working with a DataFrame, this is all you need:

import pyspark.sql.functions as F
# other imports, SparkSession setup, etc.

# This returns the min and max (assuming you only need those two values)
df.select(F.min('datetime_column_name'), F.max('datetime_column_name')).show()

That's it!
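If you want the values in Python variables rather than printed, a small variation (still using the hypothetical datetime_column_name column) is to collect the single-row result with first():

row = df.select(
    F.min('datetime_column_name').alias('min_dt'),
    F.max('datetime_column_name').alias('max_dt')
).first()
print row['min_dt'], row['max_dt']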