cogroup on a KeyValueGroupedDataset in Spark

Date: 2018-01-17 10:10:16

Tags: scala apache-spark

I want to use the cogroup method on KeyValueGroupedDataset in Spark. Here is a Scala attempt, but it fails with an error:

import org.apache.spark.sql.functions._
val x1 = Seq(("a", 36), ("b", 33), ("c", 40), ("a", 38), ("c", 39)).toDS
val g1 = x1.groupByKey(_._1)
val x2 = Seq(("a", "ali"), ("b", "bob"), ("c", "celine"), ("a", "amin"), ("c", "cecile")).toDS
val g2 = x2.groupByKey(_._1)
val cog = g1.cogroup(g2, (k: Long, iter1:Iterator[(String, Int)], iter2:Iterator[(String, String)]) =>  iter1);

The error message:

<console>:34: error: overloaded method value cogroup with alternatives:
  [U, R](other: org.apache.spark.sql.KeyValueGroupedDataset[String,U], f: org.apache.spark.api.java.function.CoGroupFunction[String,(String, Int),U,R], encoder: org.apache.spark.sql.Encoder[R])org.apache.spark.sql.Dataset[R] <and>
  [U, R](other: org.apache.spark.sql.KeyValueGroupedDataset[String,U])(f: (String, Iterator[(String, Int)], Iterator[U]) => TraversableOnce[R])(implicit evidence$11: org.apache.spark.sql.Encoder[R])org.apache.spark.sql.Dataset[R]
 cannot be applied to (org.apache.spark.sql.KeyValueGroupedDataset[String,(String, String)], (Long, Iterator[(String, Int)], Iterator[(String, String)]) => Iterator[(String, Int)])
       val cog = g1.cogroup(g2, (k: Long, iter1:Iterator[(String, Int)], iter2:Iterator[(String, String)]) =>  iter1);

I get the same error in Java.

1 answer:

Answer 0: (score: 1)

The cogroup method you are trying to use is curried, so the other dataset and the function must be passed in separate argument lists. There is also a type mismatch in the key type: the key is a String here, not a Long:

g1.cogroup(g2)(
  (k: String, it1: Iterator[(String, Int)], it2: Iterator[(String, String)]) => 
    it1)

或只是:

g1.cogroup(g2)((_, it1, _) => it1)
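The semantics of cogroup can be sketched without a Spark cluster: both inputs are grouped by key, and for every key present in either side the function receives the key plus an iterator over each side's values. The helper below (cogroupLocal, a hypothetical name, not part of the Spark API) mimics that behavior on plain Seqs, using the same sample data as the question:

```scala
// Hypothetical local stand-in for KeyValueGroupedDataset.cogroup:
// group both sides by key, then call f once per key that appears in
// either input, with iterators over that key's values from each side.
def cogroupLocal[K, A, B, R](
    left: Seq[(K, A)],
    right: Seq[(K, B)])(
    f: (K, Iterator[A], Iterator[B]) => Iterator[R]): Seq[R] = {
  val l = left.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2) }
  val r = right.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2) }
  (l.keySet ++ r.keySet).toSeq.flatMap { k =>
    f(k, l.getOrElse(k, Nil).iterator, r.getOrElse(k, Nil).iterator)
  }
}

val ages  = Seq(("a", 36), ("b", 33), ("c", 40), ("a", 38), ("c", 39))
val names = Seq(("a", "ali"), ("b", "bob"), ("c", "celine"),
                ("a", "amin"), ("c", "cecile"))

// Same shape as the fixed Spark call: the key is a String, and the
// result keeps only the left side's values, tagged with their key.
val out = cogroupLocal(ages, names)((k, it1, _) => it1.map((k, _)))
```

Note that, unlike the Spark version above (where groupByKey keeps the whole tuple as the value), this sketch strips the key out of the values for brevity.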

In Java, use the CoGroupFunction variant:

import org.apache.spark.api.java.function.CoGroupFunction;
import org.apache.spark.sql.Encoders;

g1.cogroup(
  g2,
  (CoGroupFunction<String, Tuple2<String, Integer>, Tuple2<String, String>, Tuple2<String, Integer>>) (key, it1, it2) -> it1,
  Encoders.tuple(Encoders.STRING(), Encoders.INT()));

where g1 and g2 are a KeyValueGroupedDataset<String, Tuple2<String, Integer>> and a KeyValueGroupedDataset<String, Tuple2<String, String>>, respectively.