I am trying to develop an algorithm for finding articulation points in a graph. I am using Spark's GraphX library, which has many useful algorithms such as PageRank and connected components, but as far as I can tell, nothing for finding cut edges or articulation points.
I found an algorithm for finding articulation points written in Java. I translated the algorithm into Scala as follows:
// A recursive function that finds articulation points using DFS
// u --> The vertex to be visited next
// visited[] --> Keeps track of visited vertices
// disc[] --> Stores discovery times of visited vertices
// low[] --> Stores the lowest discovery time reachable from each vertex
// parent[] --> Stores parent vertices in the DFS tree
// ap[] --> Stores articulation points
def APUtil(u: Int, visited: Array[Boolean], disc: Array[Int], low: Array[Int], parent: Array[Int], ap: Array[Boolean]): Unit = {
  var children = 0
  visited(u) = true
  time += 1
  disc(u) = time
  low(u) = time
  val vertexId = graph.vertices.zipWithIndex.map {
    case (k, v) => (v, k)
  }.lookup(u).toMap.keys.iterator.next().toInt
  val direction: EdgeDirection = EdgeDirection.Out
  val neighbors = graph.collectNeighborIds(direction).lookup(vertexId)(0)
  for (neighborId <- neighbors) {
    val index = graph.vertices.zipWithIndex.filter { _._1._1 == neighborId }.collect()(0)._2.toInt
    if (!visited(index)) {
      children += 1
      parent(index) = u
      APUtil(index, visited, disc, low, parent, ap)
      low(u) = math.min(low(u), low(index))
      if (parent(u) == -1 && children > 1) {
        ap(u) = true
      }
      if (parent(u) != -1 && low(index) >= disc(u)) {
        ap(u) = true
      }
    } else if (index != parent(u)) {
      low(u) = math.min(low(u), disc(index))
    }
  }
}
The algorithm works correctly when I run it on small data. However, when I use a larger file, Spark throws a StackOverflowError. I have read about tail recursion, but I cannot figure out how to convert this recursive function into a tail-recursive one. (I also tried annotating it with @tailrec, but the compiler rejects that, since the recursive call is not in tail position.)
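For concreteness, here is a rough sketch of what I think an explicit-stack version of the same DFS might look like on a plain adjacency list (names like `adj` and `findArticulationPoints` are placeholders I made up, and the GraphX lookups are left out entirely). I am not sure whether this is the right direction, or how to fold the GraphX neighbor lookups back into it:

```scala
object ArticulationPoints {
  // Iterative version of the low-link DFS: each stack frame holds a vertex
  // and an iterator over its neighbors that still need to be processed,
  // so no call-stack depth is consumed by the traversal.
  def findArticulationPoints(n: Int, adj: Array[List[Int]]): Set[Int] = {
    val visited = Array.fill(n)(false)
    val disc = Array.fill(n)(0)
    val low = Array.fill(n)(0)
    val parent = Array.fill(n)(-1)
    val ap = Array.fill(n)(false)
    val children = Array.fill(n)(0) // DFS-tree child counts, for the root rule
    var time = 0

    for (start <- 0 until n if !visited(start)) {
      val stack = scala.collection.mutable.Stack[(Int, Iterator[Int])]()
      visited(start) = true
      time += 1; disc(start) = time; low(start) = time
      stack.push((start, adj(start).iterator))

      while (stack.nonEmpty) {
        val (u, it) = stack.top
        if (it.hasNext) {
          val v = it.next()
          if (!visited(v)) {
            // Tree edge: "descend" by pushing a new frame instead of recursing
            children(u) += 1
            parent(v) = u
            visited(v) = true
            time += 1; disc(v) = time; low(v) = time
            stack.push((v, adj(v).iterator))
          } else if (v != parent(u)) {
            // Back edge: update low as in the recursive else-branch
            low(u) = math.min(low(u), disc(v))
          }
        } else {
          // All neighbors of u done: this is the "return" from the recursion,
          // so propagate low(u) to the parent and apply the non-root AP rule
          stack.pop()
          if (stack.nonEmpty) {
            val p = parent(u)
            low(p) = math.min(low(p), low(u))
            if (parent(p) != -1 && low(u) >= disc(p)) ap(p) = true
          }
        }
      }
      // Root rule: the DFS root is an articulation point iff it has > 1 children
      if (children(start) > 1) ap(start) = true
    }
    (0 until n).filter(ap).toSet
  }
}
```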
Any help is appreciated!