Running DataFrame computations in parallel in Spark

Time: 2016-06-28 07:02:59

Tags: scala apache-spark foreach dataframe

I wrote a class that takes a DataFrame, does some calculations on it, and can export the results. The DataFrames are generated from a list of keys. I know that I am currently doing this in a very inefficient way:

var l = List(34, 32, 132, 352)      // Scala List

l.foreach{i => 
    val data:DataFrame = DataContainer.getDataFrame(i) // get DataFrame
    val x = new MyClass(data)                     // initialize MyClass with new Object
    x.setSettings(...)
    x.calcSomething()
    x.saveResults()                               // writes the Results into another Dataframe that is saved to HDFS
}

I think the foreach over the Scala list is not parallel, so how can I avoid using foreach here? The DataFrame computations could run in parallel, since the result of one computation is not input to the next one. How can I achieve that?

Thanks a lot!!

EDIT:

What I was trying to do:

val l = List(34, 32, 132, 352)      // Scala List
var l_DF:List[DataFrame] = List()
l.foreach{ i =>
    DataContainer.getDataFrame(i)::l        //append DataFrame to List of Dataframes
}

val rdd:DataFrame = sc.parallelize(l)
rdd.foreach(data =>
    val x = new MyClass(data)
)

But that gives

Invalid tree; null:
null

EDIT 2: Okay, I don't know how everything works under the hood ...

1) Everything works fine when I execute it in the spark-shell:

spark-shell --driver-memory 10g
//...
var l = List(34, 32, 132, 352)      // Scala List

l.foreach{i => 
    val data:DataFrame = AllData.where($"a" === i) // get DataFrame
    val x = new MyClass(data)                     // initialize MyClass with new Object
    x.calcSomething()
}

2) Error when I start it with:

spark-shell --master yarn-client --num-executors 10 --driver-memory 10g
// same code as above
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@7b600fed rejected from java.util.concurrent.ThreadPoolExecutor@1431127[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1263]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
    at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:133)
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
    at scala.concurrent.Promise$class.complete(Promise.scala:55)
    at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
    at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
    at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at org.spark-project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
    at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:133)
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
    at scala.concurrent.Promise$class.complete(Promise.scala:55)
    at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)

3) I also get an error when I try to parallelize:

spark-shell --master yarn-client --num-executors 10 --driver-memory 10g
//...
var l = List(34, 32, 132, 352).par
// same code as above, just parallelized before calling foreach
// I can see the parallel execution in the console messages (my class prints some, and they now appear in parallel instead of sequentially)

scala.collection.parallel.CompositeThrowable: Multiple exceptions thrown during a parallel computation: java.lang.IllegalStateException: SparkContext has been shutdown
org.apache.spark.SparkContext.runJob(SparkContext.scala:1816)
org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:215)
    org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:207)
org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1385)
org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1385)
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1903)
org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1384)
.
.
.

java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext
org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:104)
 org.apache.spark.SparkContext.broadcast(SparkContext.scala:1320)
   org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:104)
org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
org.apache.spark.sql.execution.SparkStrategies$EquiJoinSelection$.makeBroadcastHashJoin(SparkStrategies.scala:92)
org.apache.spark.sql.execution.SparkStrategies$EquiJoinSelection$.apply(SparkStrategies.scala:104)

There are actually more than 10 executors available, but only 4 nodes. I never configure the SparkContext myself; it is already provided at startup.

3 Answers:

Answer 0 (score: 7):

You can use Scala's parallel collections to achieve foreach parallelism on the driver side.

val l = List(34, 32, 132, 352).par
l.foreach { i =>
  // your code to be run in parallel for each i
}

*But be aware: is your cluster capable of running the jobs in parallel? You can submit the jobs to the Spark cluster in parallel, but they may end up queued on the cluster and executed sequentially.
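For illustration, a minimal sketch of applying this to the loop from the question (assuming DataContainer and MyClass as defined there; all jobs are submitted concurrently from the driver) could look like this:

val keys = List(34, 32, 132, 352).par   // parallel collection: foreach runs on a thread pool

keys.foreach { i =>
  val data: DataFrame = DataContainer.getDataFrame(i) // each thread builds its own DataFrame
  val x = new MyClass(data)
  x.setSettings(...)                                  // settings elided as in the question
  x.calcSomething()
  x.saveResults()                                     // each save is an independent Spark job
}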

Answer 1 (score: 0):

You can use Scala's Future together with Spark's Fair Scheduling, e.g.

import scala.concurrent._
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

object YourApp extends App { 
  val sc = ... // SparkContext, be sure to set spark.scheduler.mode=FAIR
  var pool = 0
  // this is to have different pools per job, you can wrap it to limit the no. of pools
  def poolId = {
    pool = pool + 1
    pool
  }
  def runner(i: Int) = Future {
    sc.setLocalProperty("spark.scheduler.pool", poolId.toString) // pool name must be a String
    val data:DataFrame = DataContainer.getDataFrame(i) // get DataFrame
    val x = new MyClass(data)                     // initialize MyClass with new Object
    x.setSettings(...)
    x.calcSomething()
    x.saveResults()
  }

  val l = List(34, 32, 132, 352)      // Scala List
  val futures = l map(i => runner(i))

  // now you need to wait all your futures to be completed
  futures foreach(f => Await.ready(f, Duration.Inf))

}

Using the FairScheduler with different pools, each concurrent job will get a fair share of the Spark cluster's resources.
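For reference, a minimal way to enable the fair scheduler when building the context (a sketch; the same property can also be passed with --conf spark.scheduler.mode=FAIR on spark-submit) is:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("YourApp")
  .set("spark.scheduler.mode", "FAIR")   // schedule concurrent jobs fairly instead of FIFO
val sc = new SparkContext(conf)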

Some references on Scala Futures here. You may need to add the necessary callbacks on completion, success, and/or failure.
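A sketch of such callbacks, reusing the runner from the code above (the println messages and key list are just placeholders), might look like:

import scala.concurrent._
import scala.concurrent.duration._
import scala.util.{Failure, Success}
import ExecutionContext.Implicits.global

val keys = List(34, 32, 132, 352)
val futures = keys.map(i => runner(i))

// per-job callbacks: log success or failure of each submitted job
futures.zip(keys).foreach { case (f, i) =>
  f.onComplete {
    case Success(_)  => println(s"job for key $i finished")
    case Failure(ex) => println(s"job for key $i failed: ${ex.getMessage}")
  }
}

// wait for every job, even if some of them fail
futures.foreach(f => Await.ready(f, Duration.Inf))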

Answer 2 (score: 0):

I did this using something like List.par.foreach{object => print(object)}. I am using Zeppelin on Spark 2.3. I have a similar use case where I need to fetch the data day by day and process it separately. Because of some join conditions on the tables I use, it cannot be done on a whole month's data at once. Here is a sample of my code:

import java.time.LocalDate
import java.sql.Date

var start =  LocalDate.of(2019, 1, 1)
val end   =  LocalDate.of(2019, 2, 1)
var list : List[LocalDate] = List()

var usersDf = spark.read.load("s3://production/users/")
usersDf.createOrReplaceTempView("usersDf")

while (start.isBefore(end)){
    list = start :: list
    start = start.plusDays(1)
}

list.par.foreach{ loopDate =>
    //println(start)
    var yesterday = loopDate.plusDays(-1)
    var tomorrow = loopDate.plusDays(1)
    var lastDay = yesterday.getDayOfMonth()
    var lastMonth = yesterday.getMonthValue()
    var lastYear = yesterday.getYear()

    var day = loopDate.getDayOfMonth()
    var month = loopDate.getMonthValue()
    var year = loopDate.getYear()
    var dateDay = loopDate

    var condition: String = ""
    if (month == lastMonth) {
        condition = s"where year = $year and month = $month and day in ($day, $lastDay)"
    } else {
        condition = s"""where ((year = $year and month = $month and day = $day) or
        (year = $lastYear and month = $lastMonth and day = $lastDay)) 
        """
    }

    //Get events in local timezone
    var aggPbDf = spark.sql(s"""
            with users as (
            select * from usersDf -- temp view registered above
            where account_creation_date < '$tomorrow'
        )
        , cte as (
            select e.*, date(from_utc_timestamp(to_timestamp(concat(e.year,'-', e.month, '-', e.day, ' ', e.hour), 'yyyy-MM-dd HH'), coalesce(u.timezone_name, 'UTC'))) as local_date
            from events.user_events e
            left join users u
            on u.account_id = e.account_id
            $condition)
        select * from cte
        where local_date = '$dateDay'
    """
    )
    aggPbDf.write.mode("overwrite")
        .format("parquet")
        .save(s"s3://prod-bucket/events/local-timezone/date_day=$dateDay")
}

This fetches two days of data, processes it, and writes out only the required output. Running this without par takes about 15 minutes per day, while with par the whole month takes about 1 hour. This will also depend on what your Spark cluster can support and on the size of your data.
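If the cluster cannot absorb all days at once, the degree of parallelism of the collection can be capped with a custom task support. This is a sketch, assuming Scala 2.12+, where ForkJoinTaskSupport takes a java.util.concurrent.ForkJoinPool (on Scala 2.11 the pool comes from scala.concurrent.forkjoin instead):

import java.util.concurrent.ForkJoinPool
import scala.collection.parallel.ForkJoinTaskSupport

val parDates = list.par
// run at most 4 days concurrently instead of one task per available core
parDates.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(4))

parDates.foreach { loopDate =>
  // same processing body as in the loop above
}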