Spark dataframe: adding the result of a window function to a regular function like max. Auto-increment

Posted: 2016-10-04 00:34:47

Tags: sql apache-spark dataframe pyspark spark-dataframe

I need to generate auto-incrementing values for the id field. My approach is to combine a window function with the max function.

I'm trying to find a pure DataFrame solution (no RDDs).

So after my right outer join I end up with this DataFrame:

df2 = sqlContext.createDataFrame([(1,2), (3, None), (5, None)], ['someattr', 'id'])

# notice the null values? these are new records that don't have an id yet.
# The task is to generate their ids. Preferably with one query.

df2.show()

+--------+----+
|someattr|  id|
+--------+----+
|       1|   2|
|       3|null|
|       5|null|
+--------+----+

I need to generate auto-incrementing values for the id field. My approach is to use a window function:

df2.withColumn('id', when(df2.id.isNull(), row_number().over(Window.partitionBy('id').orderBy('id')) + max('id')).otherwise(df2.id))

When I run this, it raises an exception:

AnalysisException                         Traceback (most recent call last)
<ipython-input-102-b3221098e895> in <module>()
     10 
     11 
---> 12 df2.withColumn('hello', when(df2.id.isNull(), row_number().over(Window.partitionBy('id').orderBy('id')) + max('id')).otherwise(df2.id)).show()

/Users/ipolynets/workspace/spark-2.0.0/python/pyspark/sql/dataframe.pyc in withColumn(self, colName, col)
   1371         """
   1372         assert isinstance(col, Column), "col should be Column"
-> 1373         return DataFrame(self._jdf.withColumn(colName, col._jc), self.sql_ctx)
   1374 
   1375     @ignore_unicode_prefix

/Users/ipolynets/workspace/spark-2.0.0/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
    931         answer = self.gateway_client.send_command(command)
    932         return_value = get_return_value(
--> 933             answer, self.gateway_client, self.target_id, self.name)
    934 
    935         for temp_arg in temp_args:

/Users/ipolynets/workspace/spark-2.0.0/python/pyspark/sql/utils.pyc in deco(*a, **kw)
     67                                              e.java_exception.getStackTrace()))
     68             if s.startswith('org.apache.spark.sql.AnalysisException: '):
---> 69                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)
     70             if s.startswith('org.apache.spark.sql.catalyst.analysis'):
     71                 raise AnalysisException(s.split(': ', 1)[1], stackTrace)

AnalysisException: u"expression '`someattr`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;"

To be honest, I'm not sure what this exception is complaining about.

Notice how I add a window function to the regular max() function?

row_number().over(Window.partitionBy('id').orderBy('id')) + max('id')

I'm not sure whether that is even allowed.

Oh... and here is the expected output of the desired query. You have probably figured it out already.

+--------+----+
|someattr|  id|
+--------+----+
|       1|   2|
|       3|   3|
|       5|   4|
+--------+----+

1 Answer:

Answer 0 (score: 1)

You are adding a column, so the resulting DataFrame will also contain the someattr column.

You have to include someattr in the group by, or use it inside some aggregate function.
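
Put differently, a bare max('id') is treated as an aggregate, not a window function, so Spark expects every other selected column (here someattr) to be grouped or aggregated. To evaluate the max per row you would have to give it a window spec as well. A minimal sketch of that idea, with window names of my own choosing rather than anything from the original post:

from pyspark.sql import Window
from pyspark.sql import functions as F

# A window with no partition columns spans the whole DataFrame; Spark will
# warn that all data moves to a single partition, which is fine for small data.
w_all = Window.partitionBy()

# Evaluated per row instead of collapsing the frame into a single group.
current_max_id = F.max('id').over(w_all)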

However, it's simpler to do it like this:

df2.registerTempTable("test")
df3 = sqlContext.sql("""
    select t.someattr,
           -- existing ids are kept; the null ids get a row number within the
           -- null partition, offset by the current maximum id
           nvl(t.id, row_number() over (partition by id order by someattr) + maxId.maxId) as id
    from test t
    cross join (select max(id) as maxId from test) as maxId
""")

Of course you can translate this into the DSL (a rough sketch follows below), but to me SQL seems simpler for this task.
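
For the record, a DSL translation might look like the sketch below. This is my own guess at an equivalent, assuming Spark accepts two different window specs in the same expression; it is not taken from the original answer.

from pyspark.sql import Window
from pyspark.sql import functions as F

# Partitioning by id puts all null-id rows into one partition, so within that
# partition row_number() yields 1, 2, ...; max('id') over the whole frame
# supplies the offset to add on top.
w_nulls = Window.partitionBy('id').orderBy('someattr')
w_all = Window.partitionBy()

df3 = df2.withColumn(
    'id',
    F.when(
        F.col('id').isNull(),
        F.row_number().over(w_nulls) + F.max('id').over(w_all)
    ).otherwise(F.col('id'))
)
df3.show()
# Should match the expected output above: ids 2, 3 and 4.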