EDIT - I have discovered that the book was written for Spark 1.6 (which builds against Scala 2.10), whereas the rest of my setup is on Scala 2.11.
I am trying to implement the weighted shortest-paths algorithm from Michael Malak and Robin East's Spark GraphX in Action. The part in question is Listing 6.4, "Executing the shortest path algorithm that uses breadcrumbs", from Chapter 6, available here.
I have my own graph, which I build from two RDDs. It has 344436 vertices and 772983 edges. I can run the unweighted shortest-path computation with the native GraphX library, so I am confident the graph itself is constructed correctly.
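For reference, that unweighted check looks roughly like the sketch below; it assumes the my_graph and v1 values defined in the listing that follows, and uses GraphX's built-in landmark-based ShortestPaths, which counts hops rather than edge weights.
import org.apache.spark.graphx.lib.ShortestPaths

// Unweighted (hop-count) distances from every vertex to the landmark v1 (a sketch only).
val hops = ShortestPaths.run(my_graph, Seq(v1))
hops.vertices.take(5).foreach(println)   // (vertexId, Map(landmarkId -> hopCount))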
In this case I use their Dijkstra implementation as follows:
val my_graph: Graph[(Long),Double] = Graph.apply(verticesRDD, edgesRDD).cache()
def dijkstra[VD](g:Graph[VD,Double], origin:VertexId) = {
  var g2 = g.mapVertices(
    (vid,vd) => (false, if (vid == origin) 0 else Double.MaxValue, List[VertexId]())
  )
  for (i <- 1L to g.vertices.count-1) {
    val currentVertexId = g2.vertices
      .filter(!_._2._1)
      .fold((0L, (false, Double.MaxValue, List[VertexId]())))(
        (a,b) => if (a._2._2 < b._2._2) a else b)
      )
      ._1
    val newDistances = g2.aggregateMessages[(Double, List[VertexId])](
      ctx => if (ctx.srcId == currentVertexId) {
        ctx.sendToDst((ctx.srcAttr._2 + ctx.attr, ctx.srcAttr._3 :+ ctx.srcId))
      },
      (a,b) => if (a._1 < b._1) a else b
    )
    g2 = g2.outerJoinVertices(newDistances)((vid, vd, newSum) => {
      val newSumVal = newSum.getOrElse((Double.MaxValue,List[VertexId]()))
      (
        vd._1 || vid == currentVertexId,
        math.min(vd._2, newSumVal._1),
        if (vd._2 < newSumVal._1) vd._3 else newSumVal._2
      )
    })
  }
  g.outerJoinVertices(g2.vertices)((vid, vd, dist) =>
    (vd, dist.getOrElse((false,Double.MaxValue,List[VertexId]()))
      .productIterator.toList.tail
    ))
}
// Path Finding - random node from which to find all paths
val v1 = 4000000028222916L
I then call their function with my graph and a random vertex ID. Previously I had a problem where v1 was not being recognised as a Long; adding the L suffix fixed that.
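Just to illustrate what the suffix changes (the names ok and bad below are only for this example):
val ok: Long = 4000000028222916L   // Long literal thanks to the L suffix; GraphX's VertexId is an alias for Long
// val bad = 4000000028222916      // would not compile: the value does not fit in an Int literal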
val results = dijkstra(my_graph, 1L).vertices.map(_._2).collect
println(results)
However, this returns the following:
Error: Exception in thread "main" java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;
at GraphX$.dijkstra$1(GraphX.scala:51)
at GraphX$.main(GraphX.scala:85)
at GraphX.main(GraphX.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Line 51 refers to the line var g2 = g.mapVertices(
Line 85 refers to the line val results = dijkstra(my_graph, 1L).vertices.map(_._2).collect
What method is this exception referring to? I can package the project with sbt without any errors, and I cannot see which method I am calling that does not exist.
Answer 0 (score: 6)
The problem is neither a wrong version nor a missing implementation; it is a misleading error from the compiler.
After digging into the code, I noticed that the following part contains an extra closing parenthesis:
val currentVertexId: VertexId = g2.vertices.filter(!_._2._1)
  .fold((0L, (false, Double.MaxValue, List[VertexId]())))(
    (a, b) => if (a._2._2 < b._2._2) a else b))._1
                                              ^
                                              |
You just need to remove that extra parenthesis and it works perfectly. Here is the complete code:
// scala> :pa
// Entering paste mode (ctrl-D to finish)
import org.apache.spark.graphx._
def dijkstra[VD](g: Graph[VD, Double], origin: VertexId) = {
  var g2 = g.mapVertices(
    (vid, vd) => (false, if (vid == origin) 0 else Double.MaxValue, List[VertexId]())
  )
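  // Vertex state: (visited?, best known distance from the origin, breadcrumb list of predecessor IDs).
  // Each iteration of the loop below settles the unvisited vertex with the smallest tentative distance.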
  for (i <- 1L to g.vertices.count - 1) {
    val currentVertexId: VertexId = g2.vertices.filter(!_._2._1)
      .fold((0L, (false, Double.MaxValue, List[VertexId]())))(
        (a, b) => if (a._2._2 < b._2._2) a else b)._1
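    // Send candidate (distance, path) messages from the just-settled vertex to its out-neighbours,
    // keeping the smaller distance when a vertex receives more than one message.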
    val newDistances: VertexRDD[(Double, List[VertexId])] =
      g2.aggregateMessages[(Double, List[VertexId])](
        ctx => if (ctx.srcId == currentVertexId) {
          ctx.sendToDst((ctx.srcAttr._2 + ctx.attr, ctx.srcAttr._3 :+ ctx.srcId))
        },
        (a, b) => if (a._1 < b._1) a else b
      )
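    // Fold the candidate distances back into the vertex state and mark the current vertex as visited.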
    g2 = g2.outerJoinVertices(newDistances)((vid, vd, newSum) => {
      val newSumVal = newSum.getOrElse((Double.MaxValue, List[VertexId]()))
      (
        vd._1 || vid == currentVertexId,
        math.min(vd._2, newSumVal._1),
        if (vd._2 < newSumVal._1) vd._3 else newSumVal._2
      )
    })
  }
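  // Join the (distance, breadcrumb) results back onto the original vertex attributes,
  // dropping the internal "visited" flag via productIterator.toList.tail.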
  g.outerJoinVertices(g2.vertices)((vid, vd, dist) =>
    (vd, dist.getOrElse((false, Double.MaxValue, List[VertexId]()))
      .productIterator.toList.tail
    ))
}
// Path Finding - random node from which to find all paths
Now, let's test it:
val myVertices: RDD[(VertexId, String)] = sc.makeRDD(Array((1L, "A"), (2L, "B"), (3L, "C"), (4L, "D"), (5L, "E"), (6L, "F"), (7L, "G")))
val myEdges: RDD[Edge[Double]] = sc.makeRDD(Array(Edge(1L, 2L, 7.0), Edge(1L, 4L, 5.0), Edge(2L, 3L, 8.0), Edge(2L, 4L, 9.0), Edge(2L, 5L, 7.0), Edge(3L, 5L, 5.0), Edge(4L, 5L, 15.0), Edge(4L, 6L, 6.0),Edge(5L, 6L, 8.0), Edge(5L, 7L, 9.0), Edge(6L, 7L, 11.0)))
val my_graph = Graph(myVertices, myEdges).cache()
val v1 = 4000000028222916L
val results = dijkstra(my_graph, 1L).vertices.map(_._2).collect
// [CTRL-D]
// Exiting paste mode, now interpreting.
// [Lscala.Tuple2;@668a0785
// import org.apache.spark.graphx._
// myVertices: org.apache.spark.rdd.RDD[(org.apache.spark.graphx.VertexId, String)] = ParallelCollectionRDD[556] at makeRDD at <console>:37
// myEdges: org.apache.spark.rdd.RDD[org.apache.spark.graphx.Edge[Double]] = ParallelCollectionRDD[557] at makeRDD at <console>:39
// my_graph: org.apache.spark.graphx.Graph[String,Double] = org.apache.spark.graphx.impl.GraphImpl@49ea0d90
// dijkstra: [VD](g: org.apache.spark.graphx.Graph[VD,Double], origin: org.apache.spark.graphx.VertexId)org.apache.spark.graphx.Graph[(VD, List[Any]),Double]
// v1: Long = 4000000028222916
// results: Array[(String, List[Any])] = Array((A,List(0.0, List())), (B,List(7.0, List(1))), (C,List(15.0, Li...
scala> results.foreach(println)
// (A,List(0.0, List()))
// (B,List(7.0, List(1)))
// (C,List(15.0, List(1, 2)))
// (D,List(5.0, List(1)))
// (E,List(14.0, List(1, 2)))
// (F,List(11.0, List(1, 4)))
// (G,List(22.0, List(1, 4, 6)))
Answer 1 (score: 1)
Look at the error:
Exception in thread "main" java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create
Your program is looking for the create() method but cannot find it.
This type of error typically occurs when the Scala and Spark versions do not match, or when the code is compiled with one Scala version but run against another.
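A quick way to check is to print both versions from the running job or shell (a rough sketch; sc is assumed to be an active SparkContext):
println(scala.util.Properties.versionNumberString)   // Scala version actually on the classpath, e.g. "2.10.6" or "2.11.8"
println(sc.version)                                   // version of the running Spark context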
Scala version differences:
If you check Scala 2.10.x:
https://github.com/scala/scala/blob/2.10.x/src/library/scala/runtime/ObjectRef.java
you will see that the ObjectRef class looks like this:
public T elem;
public ObjectRef(T elem) { this.elem = elem; }
public String toString() { return String.valueOf(elem); }
But if you check Scala 2.11.x:
https://github.com/scala/scala/blob/2.11.x/src/library/scala/runtime/ObjectRef.java
you will see that the ObjectRef class looks like this:
public T elem;
public ObjectRef(T elem) { this.elem = elem; }
@Override
public String toString() { return String.valueOf(elem); }
public static <U> ObjectRef<U> create(U e) { return new ObjectRef<U>(e); } // This is the method your program is trying to call, but it is missing at runtime.
public static ObjectRef<Object> zero() { return new ObjectRef<Object>(null); }
Here the create() method exists.
So make sure that your program is compiled and run with the same Scala version.
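One way to keep them aligned is to pin the Scala version in your build to the one your Spark distribution was built with. A sketch of a build.sbt along those lines (the version numbers are assumptions and should be matched to your actual cluster):
scalaVersion := "2.10.6"   // Spark 1.6.x artifacts are published for Scala 2.10 by default

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"   % "1.6.3" % "provided",
  "org.apache.spark" %% "spark-graphx" % "1.6.3" % "provided"
)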
Also check: NoSuchMethodError for Scala Seq line in Spark
There, Robert Horvick also points out in a comment that
Spark 1.6 is based on Scala 2.10, not 2.11
So you can update your Spark version from http://spark.apache.org/downloads.html
The latest stable release is Apache Spark 2.0.1, released on October 3, 2016.