Error when returning a Spark RDD from a function called inside a map function

Date: 2019-05-04 05:19:32

Tags: scala apache-spark hbase

I have a collection of row keys (the plants shown below) from an HBase table, and I want to write a fetchData function that returns the RDD of data for a row key from the collection. The goal is to get the union of the RDDs returned by fetchData for every element of the plants collection. I have given the relevant part of the code below. My problem is that the code gives a compile error on the return type of fetchData:

  

println("PartB: " + hBaseRDD.getNumPartitions)

     

error: value getNumPartitions is not a member of Option[org.apache.spark.rdd.RDD[it.nerdammer.spark.test.sys.Record]]

I am using Scala 2.11.8 and Spark 2.2.0, and compiling with Maven.

import it.nerdammer.spark.hbase._
import org.apache.spark.sql._
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType};
import org.apache.log4j.Level
import org.apache.log4j.Logger
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
object sys {
  case class systems( rowkey: String, iacp: Option[String], temp: Option[String])

  val spark = SparkSession.builder().appName("myApp").config("spark.executor.cores",4).getOrCreate()
  import spark.implicits._

  type Record = (String, Option[String], Option[String])

  def fetchData(plant: String): RDD[Record] = {
    val start_index = plant
    val end_index = plant + "z"
    //The below command works fine if I run it in main function, but to get multiple rows from hbase, I am using it in a separate function
    spark.sparkContext.hbaseTable[Record]("test_table").select("iacp","temp").inColumnFamily("pp").withStartRow(start_index).withStopRow(end_index)

  }

  def main(args: Array[String]) {
    //the below elements in the collection are prefix of relevant rowkeys in hbase table ("test_table") 
    val plants = Vector("a8","cu","aw","fx")
    val hBaseRDD = plants.map( pp => fetchData(pp))
    println("Part: "+ hBaseRDD.getNumPartitions)
    /*
      rest of the code
    */
  }

}

Here is a working version of the code. The problem with it is that I am using a for loop, so I have to request the data from HBase for each row key (plant) of the vector in every iteration, instead of fetching all the data first and then running the rest of the code.

    import it.nerdammer.spark.hbase._
    import org.apache.spark.sql._
    import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType};
    import org.apache.log4j.Level
    import org.apache.log4j.Logger
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._
    object sys {
      case class systems( rowkey: String, iacp: Option[String], temp: Option[String])
      def main(args: Array[String]) {

        val spark = SparkSession.builder().appName("myApp").config("spark.executor.cores",4).getOrCreate()
        import spark.implicits._

        type Record = (String, Option[String], Option[String])
        val plants = Vector("a8","cu","aw","fx")

        for (plant <- plants){
          val start_index = plant
          val end_index = plant + "z"
          val hBaseRDD = spark.sparkContext.hbaseTable[Record]("test_table").select("iacp","temp").inColumnFamily("pp").withStartRow(start_index).withStopRow(end_index)
          println("Part: "+ hBaseRDD.getNumPartitions)
          /*
            rest of the code
          */
        }
      }
    }

After trying this, here is the problem I am facing now. How do I convert the type to the required one?

scala>   def fetchData(plant: String) = {
     |     val start_index = plant
     |     val end_index = plant + "~"
     |     val x1 = spark.sparkContext.hbaseTable[Record]("test_table").select("iacp","temp").inColumnFamily("pp").withStartRow(start_index).withStopRow(end_index)
     |     x1
     |   }

I defined the function in the REPL and ran:

scala> val hBaseRDD = plants.map( pp => fetchData(pp)).reduceOption(_ union _)
<console>:39: error: type mismatch;
 found   : org.apache.spark.rdd.RDD[(String, Option[String], Option[String])]
 required: it.nerdammer.spark.hbase.HBaseReaderBuilder[(String, Option[String], Option[String])]
       val hBaseRDD = plants.map( pp => fetchData(pp)).reduceOption(_ union _)

Thanks in advance!

1 Answer:

Answer 0 (score: 3):

hBaseRDD has type Vector[_], not RDD[_], so you cannot call the method getNumPartitions on it. If I understand correctly, you want to union the fetched RDDs. You can do that with plants.map( pp => fetchData(pp)).reduceOption(_ union _) (I recommend reduceOption because it does not fail on an empty list, but if you are confident the list is non-empty you can use reduce).
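
For illustration, a minimal sketch of how this could look in main, assuming fetchData returns RDD[Record] (see below); falling back to an empty RDD is just one hypothetical way of handling an empty plants collection:

    val plants = Vector("a8", "cu", "aw", "fx")

    // Union all per-plant RDDs; reduceOption avoids failing if plants is empty.
    val hBaseRDD: RDD[Record] = plants
      .map(pp => fetchData(pp))
      .reduceOption(_ union _)
      .getOrElse(spark.sparkContext.emptyRDD[Record])

    println("Part: " + hBaseRDD.getNumPartitions)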

Also, the declared return type of fetchData is RDD[U], but I could not find any definition of U. That is probably why the compiler infers Vector[Nothing] instead of Vector[RDD[Record]]. To avoid subsequent errors, you should also change RDD[U] to RDD[Record].
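
And a sketch of the adjusted fetchData. The assumption here (suggested by the fact that the same expression works when used directly in main) is that the connector provides an implicit conversion from HBaseReaderBuilder[Record] to RDD[Record], which an explicit return type annotation would trigger; that would also resolve the type mismatch shown in the REPL:

    import org.apache.spark.rdd.RDD

    // Sketch: the explicit RDD[Record] return type should force the implicit
    // conversion from the HBaseReaderBuilder[Record] that hbaseTable(...) yields.
    def fetchData(plant: String): RDD[Record] = {
      val start_index = plant
      val end_index = plant + "~"
      spark.sparkContext.hbaseTable[Record]("test_table")
        .select("iacp", "temp")
        .inColumnFamily("pp")
        .withStartRow(start_index)
        .withStopRow(end_index)
    }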