Standalone Spark job hangs when inserting into DB

Date: 2016-03-26 00:42:56

Tags: scala jdbc apache-spark insertion

I have a standalone Spark 1.4.1 job running on a Red Hat box, submitted via spark-submit, that sometimes hangs while inserting data from an RDD. I have auto-commit turned off on the connection and commit the transactions in batches of inserts. What the logs show before it hangs:
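For reference, the connection setup looks roughly like this (a minimal sketch only; the JDBC URL, table, and columns below are placeholders, not the actual setup):

    import java.sql.{Connection, DriverManager, PreparedStatement}

    // Placeholder connection details -- the real job builds these from cluster config.
    val connInsert: Connection =
      DriverManager.getConnection("jdbc:postgresql://dbhost/mydb", "user", "pass")
    connInsert.setAutoCommit(false) // each batch of inserts is committed as one transaction

    val stmt: PreparedStatement =
      connInsert.prepareStatement("INSERT INTO stuff (id, value) VALUES (?, ?)")

    // For every record in a batch:
    stmt.setInt(1, 42)     // placeholder values
    stmt.setLong(2, 123L)
    stmt.addBatch()

    // Once the batch is full (or it is the last one):
    stmt.executeBatch()
    connInsert.commit()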

16/03/25 14:00:05 INFO Executor: Finished task 3.0 in stage 138.0 (TID 915). 1847 bytes result sent to driver
16/03/25 14:00:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(StatusUpdate(915,FINISHED,java.nio.HeapByteBuffer[pos=0 lim=1847 cap=1
16/03/25 14:00:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(StatusUpdate(915,FINISHED,java.nio.HeapByteBuffer[pos=0 lim=1847 cap=1847
16/03/25 14:00:05 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_138, runningTasks: 1
16/03/25 14:00:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.118 ms) AkkaMessage(StatusUpdate(915,FINISHED,java.nio.HeapByteBuffer[pos=621 li
16/03/25 14:00:05 INFO TaskSetManager: Finished task 3.0 in stage 138.0 (TID 915) in 7407 ms on localhost (23/24)
16/03/25 14:00:05 TRACE DAGScheduler: Checking for newly runnable parent stages
16/03/25 14:00:05 TRACE DAGScheduler: running: Set(ResultStage 138)
16/03/25 14:00:05 TRACE DAGScheduler: waiting: Set()
16/03/25 14:00:05 TRACE DAGScheduler: failed: Set()
16/03/25 14:00:10 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(Heartbeat(driver,[Lscala.Tuple2;@7ed52306,BlockManagerId(driver, local
16/03/25 14:00:10 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(Heartbeat(driver,[Lscala.Tuple2;@7ed52306,BlockManagerId(driver, localhos
16/03/25 14:00:10 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.099 ms) AkkaMessage(Heartbeat(driver,[Lscala.Tuple2;@7ed52306,BlockManagerId(dri

Then it just intermittently repeats the last 3 lines, along with this one:

16/03/25 14:01:04 TRACE HeartbeatReceiver: Checking for hosts with no recent heartbeats in HeartbeatReceiver. 

I can't check the Web UI because of some firewall issues on these machines. What I have noticed is that the problem is much more prevalent when I insert in batches of 1000 rather than 100. Here is the Scala code that looks to be the culprit:

// records should have up to INSERT_BATCH_SIZE entries
private def insertStuff(records: Seq[(String, (String, Stuff1, Stuff2, Stuff3))]): Unit = {
  if (!records.isEmpty) {
    // get the prepared statement used for insertion (instantiated in an array of statements)
    val stmt = stuffInsertArray(/* stuff */)
    logger.info("Starting insertions on stuff" + table + " for " + time + " with " + records.length + " records")
    try {
      records.foreach(record => {
        // get vals from record
        ...
        // perform sanity checks
        if (/* record fails validation */) {
          // log stuff because it didn't validate
        } else {
          stmt.setInt(1, /* stuff */)
          stmt.setLong(2, /* stuff */)
          ...
          stmt.addBatch()
        }
      })

      // check if connection is still valid
      if (!connInsert.isValid(VALIDATE_CONNECTION_TIMEOUT)) {
        logger.error("Insertion connection is not valid while inserting stuff.")
        throw new RuntimeException(s"Insertion connection not valid while inserting stuff.")
      }

      logger.debug("Stuff insertion executing batch...")
      stmt.executeBatch()
      logger.debug("Stuff insertion execution complete. Committing...")
      // commit insert batch. Either INSERT_BATCH_SIZE insertions planned or the last batch to be done
      insertCommit() // this does the commit and resets some counters
      logger.debug("stuff insertion commit complete.")
    } catch {
      case e: Exception => throw new RuntimeException(s"insertStuff exception ${e.getMessage}", e)
    }
  }
}

Here is how it gets called:

    //stuffData is an RDD
    stuffData.foreachPartition(recordIt => {
      //new instance of the object of whose member function we're currently in
      val obj = new Obj(clusterInfo)
      recordIt.grouped(INSERT_BATCH_SIZE).foreach(records => obj.insertStuff(records))
    })

All the extra logging and connection checking I put in was only there to isolate the problem, but because I log for every batch of inserts the logs get convoluted. The problem persists even if I serialize the inserts. Any idea why the last task (out of 24) doesn't finish? Thanks.
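By "serializing the inserts" I mean, roughly, collapsing the data to a single partition so the batches run one after another instead of across 24 tasks in parallel (a sketch only; the exact variant I tried differed slightly):

    // Sketch of the serialized variant: repartition(1) funnels all batches through one task.
    stuffData
      .repartition(1)
      .foreachPartition(recordIt => {
        val obj = new Obj(clusterInfo)
        recordIt.grouped(INSERT_BATCH_SIZE).foreach(records => obj.insertStuff(records))
      })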

0 Answers:

There are no answers yet.