I am trying this:
aID  Name
1    aaa
2    bbb

aID  Stuff
1    01
1    02
1    06
2    01
2    03
This is not giving the expected output. How can I achieve it in the fastest way?
Answer 0 (score: 2)
It is not clear what the expected output is, but I suppose you want something like this:
import org.apache.spark.sql.functions.{count, col, when}
val streams = df.select($"stream").distinct.collect.map(_.getString(0))
val exprs = streams.map(s => count(when($"stream" === s, 1)).alias(s"stream_$s"))
df
.groupBy("class")
.agg(exprs.head, exprs.tail: _*)
// +------+--------------+----------+-----------+
// | class|stream_science|stream_law|stream_arts|
// +------+--------------+----------+-----------+
// |name 1| 2| 2| 1|
// |name 2| 2| 2| 2|
// +------+--------------+----------+-----------+
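Outside Spark, the conditional-count logic above can be sketched in plain Python. This is only an illustration of what the `count(when($"stream" === s, 1))` expressions compute for each distinct stream value, using sample `(class, stream)` pairs chosen to match the counts in the table above:

```python
from collections import Counter

# Sample (class, stream) rows matching the counts in the table above
rows = [
    ("name 1", "science"), ("name 1", "science"),
    ("name 1", "law"), ("name 1", "law"),
    ("name 1", "arts"),
    ("name 2", "science"), ("name 2", "science"),
    ("name 2", "law"), ("name 2", "law"),
    ("name 2", "arts"), ("name 2", "arts"),
]

# Count each (class, stream) pair, then lay the result out with one
# column per distinct stream value -- the same shape groupBy + agg gives
pairs = Counter(rows)
streams = sorted({s for _, s in rows})
crosstab = {
    cls: {s: pairs[(cls, s)] for s in streams}
    for cls in sorted({c for c, _ in rows})
}
print(crosstab)
# {'name 1': {'arts': 1, 'law': 2, 'science': 2},
#  'name 2': {'arts': 2, 'law': 2, 'science': 2}}
```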
If you don't care about the column names and have only a single grouping column, you can simply use DataFrameStatFunctions.crosstab:
df.stat.crosstab("class", "stream")
// +------------+---+----+-------+
// |class_stream|law|arts|science|
// +------------+---+----+-------+
// | name 1| 2| 1| 2|
// | name 2| 2| 2| 2|
// +------------+---+----+-------+
Answer 1 (score: 0)
You can group by multiple columns instead of a single column, and then filter. Since I am not fluent enough in Scala, the snippet below is in Python. Note that I have renamed "stream" and "class" to "dept" and "name" to avoid clashes with Spark's "stream" and "class" types.
# HiveContext must be imported before it can be instantiated
from pyspark.sql import HiveContext, Row
hc = HiveContext(sc)
obj = [
{"class":"name 1","stream":"science"},
{"class":"name 1","stream":"arts"},
{"class":"name 1","stream":"science"},
{"class":"name 1","stream":"law"},
{"class":"name 1","stream":"law"},
{"class":"name 2","stream":"science"},
{"class":"name 2","stream":"arts"},
{"class":"name 2","stream":"law"},
{"class":"name 2","stream":"science"},
{"class":"name 2","stream":"arts"},
{"class":"name 2","stream":"law"}
]
rdd = sc.parallelize(obj).map(lambda i: Row(dept=i['stream'], name=i['class']))
df = hc.createDataFrame(rdd)
df.groupby(df.dept, df.name).count().collect()
This produces the following output:
[
Row(dept='science', name='name 1', count=2),
Row(dept='science', name='name 2', count=2),
Row(dept='arts', name='name 1', count=1),
Row(dept='arts', name='name 2', count=2),
Row(dept='law', name='name 1', count=2),
Row(dept='law', name='name 2', count=2)
]
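The same group-by-two-columns count can be sketched without a Spark cluster. This is only an illustration of the aggregation that `df.groupby(df.dept, df.name).count()` performs, not the PySpark API itself, using the `obj` list from the snippet above:

```python
from collections import Counter

obj = [
    {"class": "name 1", "stream": "science"},
    {"class": "name 1", "stream": "arts"},
    {"class": "name 1", "stream": "science"},
    {"class": "name 1", "stream": "law"},
    {"class": "name 1", "stream": "law"},
    {"class": "name 2", "stream": "science"},
    {"class": "name 2", "stream": "arts"},
    {"class": "name 2", "stream": "law"},
    {"class": "name 2", "stream": "science"},
    {"class": "name 2", "stream": "arts"},
    {"class": "name 2", "stream": "law"},
]

# Count occurrences of each (dept, name) pair, mirroring the groupby
counts = Counter((row["stream"], row["class"]) for row in obj)
result = [
    {"dept": dept, "name": name, "count": n}
    for (dept, name), n in sorted(counts.items())
]
for r in result:
    print(r)
```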