Spark and Scala: applying a function to an RDD

Date: 2018-01-15 10:47:17

Tags: scala apache-spark

I have a VertexRDD, i.e. an RDD[(VertexId, Long)], structured as follows:

(533, 1)
(571, 2)
(590, 0)
...

where each element consists of a vertex id (533, 571, 590, ...) and its number of outgoing edges (1, 2, 0, ...).

I want to apply a function to each element of this RDD. This function must compare the number of outgoing edges against 4 thresholds.

If the number of outgoing edges is less than or equal to one of the 4 thresholds, the corresponding vertex id must be inserted into an Array (or a similar data structure), so that at the end I have 4 data structures, each containing the ids of the vertices that satisfy the comparison with the corresponding threshold.

The ids that satisfy the same threshold comparison need to accumulate in the same data structure. How can I implement this approach in parallel with Spark and Scala?

My code:

val usersGraphQuery = "MATCH (u1:Utente)-[p:PIU_SA_DI]->(u2:Utente) RETURN id(u1), id(u2), type(p)"
val usersGraph = neo.rels(usersGraphQuery).loadGraph[Any, Any]
val numUserGraphNodes = usersGraph.vertices.count
val numUserGraphEdges = usersGraph.edges.count
val maxNumOutDegreeEdgesPerNode = numUserGraphNodes - 1

// get id and number of outgoing edges of each node from the graph
// except those that have 0 outgoing edges (default behavior of the outDegrees API)
var userNodesOutDegreesRdd: VertexRDD[Int] = usersGraph.outDegrees

/* userNodesOutDegreesRdd.foreach(println) 
 * Now you can see 
 *  (533, 1)
 *  (571, 2)
 */

// I also get ids of nodes with zero outgoing edges
var fixedGraph: Graph[Any, Any] = usersGraph.outerJoinVertices(userNodesOutDegreesRdd) {
  (vid: Any, defaultOutDegrees: Any, outDegOpt: Option[Any]) => outDegOpt.getOrElse(0L)
}
var completeUserNodesOutDregreesRdd = fixedGraph.vertices

/* completeUserNodesOutDregreesRdd.foreach(println) 
* Now you can see 
*  (533, 1)
*  (571, 2)
*  (590, 0) <--
*/

// 4 thresholds that identify the 4 clusters of User nodes based on the number of their outgoing edges 
var soglia25: Double = (maxNumOutDegreeEdgesPerNode.toDouble/100)*25
var soglia50: Double = (maxNumOutDegreeEdgesPerNode.toDouble/100)*50
var soglia75: Double = (maxNumOutDegreeEdgesPerNode.toDouble/100)*75
var soglia100: Double = maxNumOutDegreeEdgesPerNode
println("soglie: "+soglia25+", "+soglia50+", "+soglia75+", "+soglia100)

// containers of individual clusters
import scala.collection.mutable.ListBuffer
var lowSAUsers = new ListBuffer[(Long, Any)]()
var mediumLowSAUsers = new ListBuffer[(Long, Any)]()
var mediumHighSAUsers = new ListBuffer[(Long, Any)]()
var highSAUsers = new ListBuffer[(Long, Any)]()
// overall container of the 4 clusters
var clustersContainer = new ListBuffer[ (String, ListBuffer[(Long, Any)]) ]()

// I WANT PARALLEL FROM HERE -----------------------------------------------
// from RDD to Array
var completeUserNodesOutDregreesArray = completeUserNodesOutDregreesRdd.take(numUserGraphNodes.toInt)

// examine each Utente node and assign it to the cluster it belongs to
for(i<-0 to numUserGraphNodes.toInt-1) { 
  // compare the number of outgoing edges (converted to a string, then to a Long)
  // with the thresholds to determine which cluster the Utente node goes into
  if( (completeUserNodesOutDregreesArray(i)._2).toString().toLong <= soglia25 ) {
    println("ok soglia25 ")
    lowSAUsers += completeUserNodesOutDregreesArray(i)
  }else if( (completeUserNodesOutDregreesArray(i)._2).toString().toLong <= soglia50 ){
    println("ok soglia50 ")
    mediumLowSAUsers += completeUserNodesOutDregreesArray(i)
  }else if( (completeUserNodesOutDregreesArray(i)._2).toString().toLong <= soglia75 ){
    println("ok soglia75 ")
    mediumHighSAUsers += completeUserNodesOutDregreesArray(i)
  }else if( (completeUserNodesOutDregreesArray(i)._2).toString().toLong <= soglia100 ){
    println("ok soglia100 ")
    highSAUsers += completeUserNodesOutDregreesArray(i)
  }

} 

// I put each cluster in the final container
clustersContainer += Tuple2("lowSAUsers", lowSAUsers)
clustersContainer += Tuple2("mediumLowSAUsers", mediumLowSAUsers)
clustersContainer += Tuple2("mediumHighSAUsers", mediumHighSAUsers)
clustersContainer += Tuple2("highSAUsers", highSAUsers)

/* clustersContainer.foreach(println) 
 * Now you can see 
 * (lowSAUsers,ListBuffer((590,0)))
 * (mediumLowSAUsers,ListBuffer((533,1)))
 * (mediumHighSAUsers,ListBuffer())
 * (highSAUsers,ListBuffer((571,2)))
 */

// ---------------------------------------------------------------------

1 answer:

Answer 0 (score: 1):

How about creating an array of tuples representing the different bins:

// seed the list with -1 so that vertices with 0 outgoing edges land in the first bin
val bins = Seq(-1.0, soglia25, soglia50, soglia75, soglia100).sliding(2)
    .map(seq => (seq(0), seq(1))).toArray
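
For instance, with the sample data above (3 vertices, hence maxNumOutDegreeEdgesPerNode = 2 and thresholds 0.5, 1.0, 1.5, 2.0), the bins come out as follows (a worked example of my own, not part of the original answer):

bins.foreach(println)
// (-1.0,0.5)
// (0.5,1.0)
// (1.0,1.5)
// (1.5,2.0)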

Then, for each element of the RDD, you find the corresponding bin, make it the key, wrap the id in a Seq, and reduce by key:

// returns the index of the (lower, upper] bin containing value, or -1 if none matches
def getBin(bins: Array[(Double, Double)], value: Int): Int = { 
   bins.indexWhere { case (a: Double, b: Double) => a < value && value <= b } 
}
userNodesOutDegreesRdd.map { 
    case (id, value) => (getBin(bins, value), Seq(id))
}.reduceByKey(_ ++ _)
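
A minimal usage sketch of how you might label the bins and collect the result, assuming the bins above; the labels array and the final collect are my additions, reusing the question's four cluster names:

// hypothetical labels mirroring the question's four clusters, in bin order
val labels = Array("lowSAUsers", "mediumLowSAUsers", "mediumHighSAUsers", "highSAUsers")

val clusters = userNodesOutDegreesRdd.map {
    case (id, value) => (getBin(bins, value), Seq(id))
}.reduceByKey(_ ++ _)
 .map { case (binIndex, ids) => (labels(binIndex), ids) }
 .collect()

clusters.foreach(println)
// e.g. (mediumLowSAUsers,List(533)), (highSAUsers,List(571))

Note that userNodesOutDegreesRdd omits vertices with 0 outgoing edges; to cover those as well, feed the joined completeUserNodesOutDregreesRdd from the question (with its values converted back to Int) through the same pipeline.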