I am trying to construct a discernibility matrix with Spark and am confused about how best to do it. I am new to Spark. I have put together a small example below to illustrate what I am after.
Discernibility matrix construction example:
Given the dataset D:
+----+-----+------+-----+
| id | a1 | a2 | a3 |
+----+-----+------+-----+
| 1 | yes | high | on |
| 2 | no | high | off |
| 3 | yes | low | off |
+----+-----+------+-----+
my discernibility table is
+-------+----+----+----+
| id,id | a1 | a2 | a3 |
+-------+----+----+----+
| 1,2 | 1 | 0 | 1 |
| 1,3 | 0 | 1 | 1 |
| 2,3 | 1 | 1 | 0 |
+-------+----+----+----+
That is, the discernibility table has a 1 wherever attribute a_i helps distinguish a pair of tuples, and a 0 otherwise.
My dataset is very large, and I want to do this as efficiently as possible. The approaches I have thought of are (1) nested for loops over all pairs of tuples and (2) using cartesian() to generate the pairs; a rough sketch of the second approach follows the questions below.
My questions are:
In the first approach, will Spark internally optimize the nested for-loop setup automatically?
In the second approach, using cartesian() incurs extra storage overhead for the intermediate RDD. Is there any way to avoid this overhead and still obtain the final discernibility table?
Which of these approaches is better, and is there any other way to construct the discernibility matrix efficiently (in both time and space)?
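For reference, here is a minimal sketch of what the cartesian() approach could look like on plain RDDs, assuming the rows are held as (id, Seq[String]) pairs; the names rows and discernibility are my own illustration, not from any established API:

val rows = sc.parallelize(Seq(
  (1, Seq("yes", "high", "on")),
  (2, Seq("no",  "high", "off")),
  (3, Seq("yes", "low",  "off"))))

val discernibility = rows.cartesian(rows)
  .filter { case ((id1, _), (id2, _)) => id1 < id2 } // keep each unordered pair once
  .map { case ((id1, v1), (id2, v2)) =>
    // 1 where the attribute distinguishes the pair, 0 otherwise
    ((id1, id2), v1.zip(v2).map { case (a, b) => if (a != b) 1 else 0 })
  }

Note that cartesian() still enumerates all n^2 row pairs before the filter runs, which is exactly the intermediate storage overhead the second question refers to.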
Answer 0 (score: 0)
For this dataframe:
scala> val df = List((1, "yes", "high", "on" ), (2, "no", "high", "off"), (3, "yes", "low", "off") ).toDF("id", "a1", "a2", "a3")
df: org.apache.spark.sql.DataFrame = [id: int, a1: string ... 2 more fields]
scala> df.show
+---+---+----+---+
| id| a1| a2| a3|
+---+---+----+---+
| 1|yes|high| on|
| 2| no|high|off|
| 3|yes| low|off|
+---+---+----+---+
We can use crossJoin to build the Cartesian product. However, the column names would then be ambiguous (and I do not really know an easy way to handle that). To deal with this, we create a second dataframe with renamed columns:
scala> val df2 = df.toDF("id_2", "a1_2", "a2_2", "a3_2")
df2: org.apache.spark.sql.DataFrame = [id_2: int, a1_2: string ... 2 more fields]
scala> df2.show
+----+----+----+----+
|id_2|a1_2|a2_2|a3_2|
+----+----+----+----+
| 1| yes|high| on|
| 2| no|high| off|
| 3| yes| low| off|
+----+----+----+----+
In this example, we can obtain the unordered combinations by filtering on id < id_2.
scala> val xp = df.crossJoin(df2)
xp: org.apache.spark.sql.DataFrame = [id: int, a1: string ... 6 more fields]
scala> xp.show
+---+---+----+---+----+----+----+----+
| id| a1| a2| a3|id_2|a1_2|a2_2|a3_2|
+---+---+----+---+----+----+----+----+
| 1|yes|high| on| 1| yes|high| on|
| 1|yes|high| on| 2| no|high| off|
| 1|yes|high| on| 3| yes| low| off|
| 2| no|high|off| 1| yes|high| on|
| 2| no|high|off| 2| no|high| off|
| 2| no|high|off| 3| yes| low| off|
| 3|yes| low|off| 1| yes|high| on|
| 3|yes| low|off| 2| no|high| off|
| 3|yes| low|off| 3| yes| low| off|
+---+---+----+---+----+----+----+----+
scala> val filtered = xp.filter($"id" < $"id_2")
filtered: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [id: int, a1: string ... 6 more fields]
scala> filtered.show
+---+---+----+---+----+----+----+----+
| id| a1| a2| a3|id_2|a1_2|a2_2|a3_2|
+---+---+----+---+----+----+----+----+
| 1|yes|high| on| 2| no|high| off|
| 1|yes|high| on| 3| yes| low| off|
| 2| no|high|off| 3| yes| low| off|
+---+---+----+---+----+----+----+----+
At this point, the problem is essentially solved. To get the final table, we can either use a when().otherwise() expression on each column pair, or use a UDF as I have done here (a sketch of the when().otherwise() alternative follows the final output below):
scala> val dist = udf((a:String, b: String) => if (a != b) 1 else 0)
dist: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function2>,IntegerType,Some(List(StringType, StringType)))
scala> val distinction = filtered.select($"id", $"id_2", dist($"a1", $"a1_2").as("a1"), dist($"a2", $"a2_2").as("a2"), dist($"a3", $"a3_2").as("a3"))
distinction: org.apache.spark.sql.DataFrame = [id: int, id_2: int ... 3 more fields]
scala> distinction.show
+---+----+---+---+---+
| id|id_2| a1| a2| a3|
+---+----+---+---+---+
| 1| 2| 1| 0| 1|
| 1| 3| 0| 1| 1|
| 2| 3| 1| 1| 0|
+---+----+---+---+---+
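For completeness, here is a sketch of the when().otherwise() alternative mentioned above, built programmatically over the attribute list so it scales to many columns (attrs, distCols, and distinction2 are names introduced here for illustration):

import org.apache.spark.sql.functions.{col, when}

// Build one 0/1 column per attribute: 1 when the pair of values differs.
val attrs = Seq("a1", "a2", "a3")
val distCols = attrs.map(c => when(col(c) =!= col(c + "_2"), 1).otherwise(0).as(c))
val distinction2 = filtered.select(col("id") +: col("id_2") +: distCols: _*)

Because these are built-in column expressions, Catalyst can optimize them directly, avoiding the serialization overhead that a UDF typically incurs.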