I'm looking for a way to apply a function to an RDD with PySpark and put the result into a new column. With DataFrames it looks straightforward. Given:
rdd = sc.parallelize([(u'1751940903', u'2014-06-19', '2016-10-19'), (u'_guid_VubEgxvPPSIb7W5caP-lXg==', u'2014-09-10', '2016-10-19')])
My code might look like this:
df = rdd.toDF(['gigya', 'inscription', 'd_date'])
df.show()
+--------------------+-----------+----------+
|               gigya|inscription|    d_date|
+--------------------+-----------+----------+
|          1751940903| 2014-06-19|2016-10-19|
|_guid_VubEgxvPPSI...| 2014-09-10|2016-10-19|
+--------------------+-----------+----------+
Then:
from datetime import datetime
from pyspark.sql.functions import udf, col
from pyspark.sql.types import IntegerType

# declare the return type explicitly; a udf defaults to StringType otherwise
get_period_day = udf(lambda item: datetime.strptime(item, "%Y-%m-%d").timetuple().tm_yday, IntegerType())
df.select('d_date', 'gigya', 'inscription', get_period_day(col('d_date')).alias('period_day')).show()
+----------+--------------------+-----------+----------+
|    d_date|               gigya|inscription|period_day|
+----------+--------------------+-----------+----------+
|2016-10-19|          1751940903| 2014-06-19|       293|
|2016-10-19|_guid_VubEgxvPPSI...| 2014-09-10|       293|
+----------+--------------------+-----------+----------+
Is there a way to do the same thing without converting my RDD to a DataFrame, for example with map?
This code gets me part of the expected result:
# x[2] is d_date, matching the DataFrame example above
rdd.map(lambda x: datetime.strptime(x[2], '%Y-%m-%d').timetuple().tm_yday).cache().collect()
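On the sample data this gives just the bare day-of-year values (a sketch of the output, assuming the sample rdd above):
# [293, 293]  -- the original row fields are lost, so this is not yet a new column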
Any help?
Answer 0 (score 2):
Try:
# append the day-of-year of d_date (x[2]) as a new tuple element
rdd.map(lambda x: x + (datetime.strptime(x[2], '%Y-%m-%d').timetuple().tm_yday, ))
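Collected, each input tuple comes back with the computed value appended (a sketch, output values assuming the sample rdd from the question):
rdd.map(lambda x: x + (datetime.strptime(x[2], '%Y-%m-%d').timetuple().tm_yday, )).collect()
# [(u'1751940903', u'2014-06-19', '2016-10-19', 293),
#  (u'_guid_VubEgxvPPSIb7W5caP-lXg==', u'2014-09-10', '2016-10-19', 293)]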
Or:
def g(x):
    # append the day-of-year of d_date (x[2]) as a new tuple element
    return x + (datetime.strptime(x[2], '%Y-%m-%d').timetuple().tm_yday, )

rdd.map(g)
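If you need a DataFrame again afterwards, the appended element becomes an ordinary column (a sketch, reusing the column names from the question):
rdd.map(g).toDF(['gigya', 'inscription', 'd_date', 'period_day']).show()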