Spark Streaming job will not schedule additional work

Asked: 2017-11-06 15:05:49

Tags: scala azure apache-spark spark-streaming azure-eventhub

Spark 2.1.1 built for Hadoop 2.7.3

Scala 2.11.11

The cluster has 3 Linux RHEL 7.3 Azure VMs running in Spark Standalone Deploy Mode (no YARN or Mesos yet).

I have created a very simple Spark Streaming job in Scala using IntelliJ. I build with Maven and package the job as a fat/uber jar that contains all of its dependencies.

When I run the job locally it works fine. It also works fine if I copy the jar to the cluster and run it with a master of local[2]. However, if I submit the job to the cluster master, it acts as if it does not want to schedule any work beyond the first task. The job starts up, grabs however many events are in the Azure Event Hub, processes them successfully, and then never does any more work. It makes no difference whether I submit the job to the master as a plain application or in supervised cluster mode; both behave the same way.
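For context, the two submissions I am comparing look roughly like this (the flags are reconstructed from the description above; {ip} is a placeholder as elsewhere in this post):

/spark/bin/spark-submit --class streamingJob.TestJob --master local[2] /spark/job-files/fatjar.jar

/spark/bin/spark-submit --class streamingJob.TestJob --master spark://{ip}:7077 --deploy-mode cluster --supervise /spark/job-files/fatjar.jar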

I have looked through every log I know of (master, driver where applicable, and executor) and I do not see any errors or warnings that look actionable. I have changed the log level (shown below) to ALL/INFO/DEBUG and sifted through those logs without finding anything that seems relevant.

It may be worth noting that I have previously created several jobs that connect to Kafka rather than Azure Event Hubs, written in Java, and those run in supervised cluster mode on this same cluster without issue. That leads me to believe the cluster configuration is not the problem; it is more likely either my code (below) or Azure Event Hubs.

Any ideas on where I might look to resolve this? Here is the code for my simple job.

Thanks in advance.

Note: conf.{name} indicates a value I load from a config file. I have tested both loading and hard-coding the values, with the same result either way. (A minimal sketch of one way such a conf object could be defined appears after the code below.)

package streamingJob

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.eventhubs.EventHubsUtils
import org.joda.time.DateTime

object TestJob {

  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
    sparkConf.setAppName("TestJob")

    // Uncomment to run locally
    //sparkConf.setMaster("local[2]")

    val sparkContext = new SparkContext(sparkConf)
    sparkContext.setLogLevel("ERROR")

    val streamingContext: StreamingContext = new StreamingContext(sparkContext, Seconds(1))

    val readerParams = Map[String, String] (
      "eventhubs.policyname" -> conf.policyname,
      "eventhubs.policykey" -> conf.policykey,
      "eventhubs.namespace" -> conf.namespace,
      "eventhubs.name" -> conf.name,
      "eventhubs.partition.count" -> conf.partitionCount,
      "eventhubs.consumergroup" -> conf.consumergroup
    )

    val eventData = EventHubsUtils.createDirectStreams(
      streamingContext,
      conf.namespace,
      conf.progressdir,
      Map("name" -> readerParams))

    eventData.foreachRDD(r => {
      r.foreachPartition { p => {
        p.foreach(d => {
          println(DateTime.now()  + ": " + d)
        }) // end of EventData
      }} // foreachPartition
    }) // foreachRDD

    streamingContext.start()
    streamingContext.awaitTermination()
  }
}
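For reference, here is a minimal sketch of one way the conf object referenced above could be defined. It assumes the Typesafe Config library (com.typesafe:config) and an application.conf layout that is not part of the original post; the key names are illustrative only.

// Hypothetical stand-in for the conf object used by TestJob.
// Assumes Typesafe Config on the classpath and an "eventhubs" section in application.conf.
import com.typesafe.config.ConfigFactory

object conf {
  private val c = ConfigFactory.load().getConfig("eventhubs")

  val policyname: String     = c.getString("policyname")
  val policykey: String      = c.getString("policykey")
  val namespace: String      = c.getString("namespace")
  val name: String           = c.getString("name")
  val partitionCount: String = c.getString("partitionCount")
  val consumergroup: String  = c.getString("consumergroup")
  val progressdir: String    = c.getString("progressdir")
}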

Below is the set of logs from when I run the job as an application against the master, not in cluster/supervised mode.

/spark/bin/spark-submit --class streamingJob.TestJob --master spark://{ip}:7077 --total-executor-cores 1 /spark/job-files/fatjar.jar

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/11/06 17:52:04 INFO SparkContext: Running Spark version 2.1.1
17/11/06 17:52:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/06 17:52:05 INFO SecurityManager: Changing view acls to: root
17/11/06 17:52:05 INFO SecurityManager: Changing modify acls to: root
17/11/06 17:52:05 INFO SecurityManager: Changing view acls groups to:
17/11/06 17:52:05 INFO SecurityManager: Changing modify acls groups to:
17/11/06 17:52:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
17/11/06 17:52:06 INFO Utils: Successfully started service 'sparkDriver' on port 44384.
17/11/06 17:52:06 INFO SparkEnv: Registering MapOutputTracker
17/11/06 17:52:06 INFO SparkEnv: Registering BlockManagerMaster
17/11/06 17:52:06 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/11/06 17:52:06 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/11/06 17:52:06 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-b5e2c0f3-2500-42c6-b057-cf5d368580ab
17/11/06 17:52:06 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
17/11/06 17:52:06 INFO SparkEnv: Registering OutputCommitCoordinator
17/11/06 17:52:06 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/11/06 17:52:06 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://{ip}:4040
17/11/06 17:52:06 INFO SparkContext: Added JAR file:/spark/job-files/fatjar.jar at spark://{ip}:44384/jars/fatjar.jar with timestamp 1509990726989
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://{ip}:7077...
17/11/06 17:52:07 INFO TransportClientFactory: Successfully created connection to /{ip}:7077 after 72 ms (0 ms spent in bootstraps)
17/11/06 17:52:07 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20171106175207-0000
17/11/06 17:52:07 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44624.
17/11/06 17:52:07 INFO NettyBlockTransferService: Server created on {ip}:44624
17/11/06 17:52:07 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20171106175207-0000/0 on worker-20171106173151-{ip}-46086 ({ip}:46086) with 1 cores
17/11/06 17:52:07 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO StandaloneSchedulerBackend: Granted executor ID app-20171106175207-0000/0 on hostPort {ip}:46086 with 1 cores, 1024.0 MB RAM
17/11/06 17:52:07 INFO BlockManagerMasterEndpoint: Registering block manager {ip}:44624 with 366.3 MB RAM, BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, {ip}, 44624, None)
17/11/06 17:52:07 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20171106175207-0000/0 is now RUNNING
17/11/06 17:52:08 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0

0 Answers:

There are no answers yet.