Why is this PageRank job so much slower with Datasets than with RDDs?

Asked: 2017-11-30 17:53:28

Tags: java apache-spark spark-dataframe apache-spark-dataset

I implemented PageRank in Java following this example, but using the newer Dataset API. When I benchmarked my code against the sample that uses the older RDD API, mine took 186 seconds while the baseline needed only 109 seconds. What causes this difference? (Side note: is it normal for Spark to need several hundred seconds even when the database contains only a handful of entries?)

My code:

Dataset<Row> outLinks = spark.read().jdbc("jdbc:postgresql://127.0.0.1:5432/postgres", "storagepage_outlinks", props);
Dataset<Row> page = spark.read().jdbc("jdbc:postgresql://127.0.0.1:5432/postgres", "pages", props);

outLinks = page.join(outLinks, page.col("id").equalTo(outLinks.col("storagepage_id")));
outLinks = outLinks.distinct().groupBy(outLinks.col("url")).agg(collect_set("outlinks")).cache();

Dataset<Row> ranks = outLinks.map(row -> new Tuple2<>(row.getString(0), 1.0), Encoders.tuple(Encoders.STRING(), Encoders.DOUBLE())).toDF("url", "rank");

for (int i = 0; i < iterations; i++) {
    Dataset<Row> joined = outLinks.join(ranks, new Set.Set1<>("url").toSeq()); // Seq of join columns built via scala.collection.immutable.Set.Set1
    Dataset<Row> contribs = joined.flatMap(row -> {
        List<String> links = row.getList(1);
        double rank = row.getDouble(2);
        return links
                .stream()
                .map(s -> new Tuple2<>(s, rank / links.size()))
                .collect(Collectors.toList()).iterator();
    }, Encoders.tuple(Encoders.STRING(), Encoders.DOUBLE())).toDF("url", "num");

    Dataset<Tuple2<String, Double>> reducedByKey =
            contribs.groupByKey(r -> r.getString(0), Encoders.STRING())
            .mapGroups((s, iterator) -> {
                double sum = 0;
                while (iterator.hasNext()) {
                    sum += iterator.next().getDouble(1);
                }
                return new Tuple2<>(s, sum);
            }, Encoders.tuple(Encoders.STRING(), Encoders.DOUBLE()));
    ranks = reducedByKey.map(t -> new Tuple2<>(t._1, .15 + t._2 * .85),
            Encoders.tuple(Encoders.STRING(), Encoders.DOUBLE())).toDF("url", "rank");
}
ranks.show();
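To make the arithmetic the loop above implements easier to check, here is a minimal, Spark-free sketch of one PageRank iteration: each page sends rank/outDegree to every page it links to, and the new rank is 0.15 + 0.85 × (sum of received contributions). The class name and the two-page graph are illustrative only, not part of the question's code.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PageRankIteration {
    // One iteration of the update performed by the Dataset loop above,
    // written with plain JDK collections.
    static Map<String, Double> iterate(Map<String, List<String>> links,
                                       Map<String, Double> ranks) {
        // Each page contributes rank/outDegree to every outlink target.
        Map<String, Double> contribs = new HashMap<>();
        links.forEach((url, outlinks) -> {
            double share = ranks.get(url) / outlinks.size();
            for (String target : outlinks) {
                contribs.merge(target, share, Double::sum);
            }
        });
        // New rank: damping constant plus damped sum of contributions.
        Map<String, Double> next = new HashMap<>();
        contribs.forEach((url, sum) -> next.put(url, 0.15 + 0.85 * sum));
        return next;
    }

    public static void main(String[] args) {
        // Hypothetical two-page graph: a <-> b.
        Map<String, List<String>> links = new HashMap<>();
        links.put("a", Arrays.asList("b"));
        links.put("b", Arrays.asList("a"));
        Map<String, Double> ranks = new HashMap<>();
        ranks.put("a", 1.0);
        ranks.put("b", 1.0);
        // Each page receives exactly 1.0, so the new rank is
        // 0.15 + 0.85 * 1.0 = 1.0 for both pages.
        System.out.println(iterate(links, ranks));
    }
}
```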

The sample code using RDDs (adapted to read from my database):

Dataset<Row> outLinks = spark.read().jdbc("jdbc:postgresql://127.0.0.1:5432/postgres", "storagepage_outlinks", props);
Dataset<Row> page = spark.read().jdbc("jdbc:postgresql://127.0.0.1:5432/postgres", "pages", props);

outLinks = page.join(outLinks, page.col("id").equalTo(outLinks.col("storagepage_id")));
outLinks = outLinks.distinct().groupBy(outLinks.col("url")).agg(collect_set("outlinks")).cache(); // TODO: play with this cache
JavaPairRDD<String, Iterable<String>> links = outLinks.javaRDD().mapToPair(row -> new Tuple2<>(row.getString(0), row.getList(1)));

// Loads all URLs with other URL(s) link to from input file and initialize ranks of them to one.
JavaPairRDD<String, Double> ranks = links.mapValues(rs -> 1.0);

// Calculates and updates URL ranks continuously using PageRank algorithm.
for (int current = 0; current < 20; current++) {
    // Calculates URL contributions to the rank of other URLs.
    JavaPairRDD<String, Double> contribs = links.join(ranks).values()
            .flatMapToPair(s -> {
                int urlCount = size(s._1()); // Iterables.size, statically imported from Guava
                List<Tuple2<String, Double>> results = new ArrayList<>();
                for (String n : s._1) {
                    results.add(new Tuple2<>(n, s._2() / urlCount));
                }
                return results.iterator();
            });

    // Re-calculates URL ranks based on neighbor contributions.
    ranks = contribs.reduceByKey((x, y) -> x + y).mapValues(sum -> 0.15 + sum * 0.85);
}

// Collects all URL ranks and dump them to console.
List<Tuple2<String, Double>> output = ranks.collect();
for (Tuple2<?,?> tuple : output) {
    System.out.println(tuple._1() + " has rank: " + tuple._2() + ".");
}

1 answer:

Answer 0 (score: 1)

TL;DR This is most likely a case of Avoid GroupByKey.

It is hard to say for certain, but your Dataset code is equivalent to groupByKey:

groupByKey(...).mapGroups(...)

This means it shuffles first and reduces the data afterwards.

Your RDD code uses reduceByKey - this shrinks the size of the shuffle by applying a local (map-side) reduction first. If you want this code to be roughly equivalent, you should rewrite groupByKey(...).mapGroups(...) as groupByKey(...).reduceGroups(...).
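As a sketch of what that rewrite could look like: the combiner is just an associative per-key sum. Below, plain JDK types stand in for scala.Tuple2 so the snippet runs without Spark on the classpath; the Spark-side form is kept in a comment, and the name typedContribs there is hypothetical.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;
import java.util.function.BinaryOperator;

public class ReduceGroupsSketch {
    // The associative combiner the rewrite relies on: sum the contribution
    // carried in the value, keeping the key. Because it is associative,
    // reduceGroups lets Spark combine values before the shuffle, which
    // mapGroups cannot do.
    static final BinaryOperator<Map.Entry<String, Double>> SUM =
            (a, b) -> new SimpleEntry<>(a.getKey(), a.getValue() + b.getValue());

    // Sketch of the rewrite inside the question's loop, assuming the
    // contributions were kept as a typed Dataset<Tuple2<String, Double>>
    // (typedContribs is a hypothetical name):
    //
    //   Dataset<Tuple2<String, Double>> reducedByKey =
    //       typedContribs
    //           .groupByKey(t -> t._1, Encoders.STRING())
    //           .reduceGroups((a, b) -> new Tuple2<>(a._1, a._2 + b._2))
    //           .map(t -> t._2, Encoders.tuple(Encoders.STRING(), Encoders.DOUBLE()));
    //
    // reduceGroups returns Dataset<Tuple2<K, V>>, hence the trailing map
    // to unwrap the reduced value.

    public static void main(String[] args) {
        Map.Entry<String, Double> merged =
                SUM.apply(new SimpleEntry<>("a", 0.25), new SimpleEntry<>("a", 0.75));
        // Prints a=1.0: same key, contributions summed.
        System.out.println(merged.getKey() + "=" + merged.getValue());
    }
}
```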

Another possible candidate is configuration. The default value of spark.sql.shuffle.partitions is 200, and that is what your Dataset aggregations will use. If, as you mention, the database contains only a few entries, this is a serious overkill.
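A minimal sketch of lowering that setting, assuming a local SparkSession; the value 8 is an arbitrary illustration for a small dataset, not a recommendation:

```java
import org.apache.spark.sql.SparkSession;

public class ShufflePartitionsConfig {
    public static void main(String[] args) {
        // Set the shuffle parallelism at session creation time.
        // The default of 200 partitions is wasteful for tiny inputs.
        SparkSession spark = SparkSession.builder()
                .master("local[*]")
                .appName("pagerank")
                .config("spark.sql.shuffle.partitions", "8")
                .getOrCreate();

        // The setting can also be changed at runtime between jobs:
        spark.conf().set("spark.sql.shuffle.partitions", "8");

        spark.stop();
    }
}
```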

RDDs will use spark.default.parallelism or a value derived from the parent data, which is usually much smaller.
