Spark Streaming does not stream, but waits after displaying the consumer config values

Time: 2017-06-09 09:44:23

Tags: spark-streaming

I am trying to stream data using the spark-streaming-kafka-0-10_2.11 artifact, version 2.1.1, together with spark-streaming_2.11 version 2.1.1 and kafka_2.11 version 0.10.2.1.

When I start the program, Spark does not stream any data and does not throw any errors either.

My code, dependencies, and the response (logs) are below.

Dependencies:

 "org.apache.kafka" % "kafka_2.11" % "0.10.2.1",
 "org.apache.spark" % "spark-streaming-kafka-0-10_2.11" % "2.1.1",
 "org.apache.spark" % "spark-streaming_2.11" % "2.1.1"

Code:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

import scala.util.Random

val sparkConf = new SparkConf().setMaster("local[*]").setAppName("kafka-dstream").set("spark.driver.allowMultipleContexts", "true")
val sc = new SparkContext(sparkConf)
val ssc = new StreamingContext(sc, Seconds(5))

val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "ec2-xx-yy-zz-abc.us-west-2.compute.amazonaws.com:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> ("java-test-consumer-" + random.nextInt()+"-" + 
System.currentTimeMillis()),
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

val topics = Array("test")
val stream = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](topics, kafkaParams)
)
stream.foreachRDD { rdd =>
  rdd.foreachPartition { records =>
    // Print the key of every record in this partition
    while (records.hasNext) {
      println(">>>>>>>>>>>>>>>>>>>>>>>>>>>" + records.next().key())
    }
  }
}

ssc.start()
ssc.awaitTermination()
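
For what it's worth, a minimal way to verify whether any records arrive at all is to let Spark Streaming print the per-batch record count and a few (key, value) pairs. This is only a sketch that reuses the `stream` defined above, and the two lines would have to be added before ssc.start():

    // Debug output (sketch): print the number of records in each 5-second batch,
    // and the first few (key, value) pairs of every batch.
    stream.count().print()
    stream.map(record => (record.key, record.value)).print()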

Response (logs):

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[info] Loading project definition from D:\Scala_Workspace\LM\project
[info] Set current project to LM (in build file:/D:/Scala_Workspace/LM/)
[info] Running consumer.One
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/06/09 12:43:44 INFO SparkContext: Running Spark version 2.1.1
17/06/09 12:43:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/09 12:43:44 INFO SecurityManager: Changing view acls to: ee208961
17/06/09 12:43:44 INFO SecurityManager: Changing modify acls to: ee208961
17/06/09 12:43:44 INFO SecurityManager: Changing view acls groups to:
17/06/09 12:43:44 INFO SecurityManager: Changing modify acls groups to:
17/06/09 12:43:44 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(ee208961); groups wi
 permissions: Set(ee208961); groups with modify permissions: Set()
17/06/09 12:43:45 INFO Utils: Successfully started service 'sparkDriver' on port 51695.
17/06/09 12:43:45 INFO SparkEnv: Registering MapOutputTracker
17/06/09 12:43:45 INFO SparkEnv: Registering BlockManagerMaster
17/06/09 12:43:45 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/06/09 12:43:45 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/06/09 12:43:45 INFO DiskBlockManager: Created local directory at C:\Users\EE208961\AppData\Local\Temp\blockmgr-7f3c43f0-d1f5-43fb-b185-a5bf78de4496
17/06/09 12:43:45 INFO MemoryStore: MemoryStore started with capacity 93.3 MB
17/06/09 12:43:45 INFO SparkEnv: Registering OutputCommitCoordinator
17/06/09 12:43:45 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/06/09 12:43:46 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.1.22.70:4040
17/06/09 12:43:46 INFO Executor: Starting executor ID driver on host localhost
17/06/09 12:43:46 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51706.
17/06/09 12:43:46 INFO NettyBlockTransferService: Server created on 10.1.22.70:51706
17/06/09 12:43:46 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/06/09 12:43:46 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.1.22.70, 51706, None)
17/06/09 12:43:46 INFO BlockManagerMasterEndpoint: Registering block manager 10.1.22.70:51706 with 93.3 MB RAM, BlockManagerId(driver, 10.1.22.70, 51706,
17/06/09 12:43:46 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.1.22.70, 51706, None)
17/06/09 12:43:46 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.1.22.70, 51706, None)
17/06/09 12:43:46 WARN KafkaUtils: overriding enable.auto.commit to false for executor
17/06/09 12:43:46 WARN KafkaUtils: overriding auto.offset.reset to none for executor
17/06/09 12:43:46 WARN KafkaUtils: overriding executor group.id to spark-executor-sadad
17/06/09 12:43:46 WARN KafkaUtils: overriding receive.buffer.bytes to 65536 see KAFKA-3135
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Slide time = 5000 ms
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Storage level = Serialized 1x Replicated
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Checkpoint interval = null
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Remember interval = 5000 ms
17/06/09 12:43:47 INFO DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka010.DirectKafkaInputDStream@5cda65c2
17/06/09 12:43:47 INFO ForEachDStream: Slide time = 5000 ms
17/06/09 12:43:47 INFO ForEachDStream: Storage level = Serialized 1x Replicated
17/06/09 12:43:47 INFO ForEachDStream: Checkpoint interval = null
17/06/09 12:43:47 INFO ForEachDStream: Remember interval = 5000 ms
17/06/09 12:43:47 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@1b9c49ce
17/06/09 12:43:47 INFO ConsumerConfig: ConsumerConfig values:
        metric.reporters = []
        metadata.max.age.ms = 300000
        partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
        reconnect.backoff.ms = 50
        sasl.kerberos.ticket.renew.window.factor = 0.8
        max.partition.fetch.bytes = 1048576
        bootstrap.servers = [ec2-54-69-23-196.us-west-2.compute.amazonaws.com:9092]
        ssl.keystore.type = JKS
        enable.auto.commit = false
        sasl.mechanism = GSSAPI
        interceptor.classes = null
        exclude.internal.topics = true
        ssl.truststore.password = null
        client.id =
        ssl.endpoint.identification.algorithm = null
        max.poll.records = 2147483647
        check.crcs = true
        request.timeout.ms = 40000
        heartbeat.interval.ms = 3000
        auto.commit.interval.ms = 5000
        receive.buffer.bytes = 65536
        ssl.truststore.type = JKS
        ssl.truststore.location = null
        ssl.keystore.password = null
        fetch.min.bytes = 1
        send.buffer.bytes = 131072
        value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
        group.id = sadad
        retry.backoff.ms = 100
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        ssl.trustmanager.algorithm = PKIX
        ssl.key.password = null
        fetch.max.wait.ms = 500
        sasl.kerberos.min.time.before.relogin = 60000
        connections.max.idle.ms = 540000
        session.timeout.ms = 30000
        metrics.num.samples = 2
        key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
        ssl.protocol = TLS
        ssl.provider = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.keystore.location = null
        ssl.cipher.suites = null
        security.protocol = PLAINTEXT
        ssl.keymanager.algorithm = SunX509
        metrics.sample.window.ms = 30000
        auto.offset.reset = latest

17/06/09 12:43:47 INFO AppInfoParser: Kafka version : 0.10.2.1
17/06/09 12:43:47 INFO AppInfoParser: Kafka commitId : a7a17cdec9eaa6c5

The code just waits at this point; it neither throws an error nor streams any data.

What I have observed is that if I leave out the "org.apache.kafka" % "kafka_2.11" % "0.10.2.1" dependency, I get

 Exception in thread "main" org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'topic_metadata': Error reading array of size 291941, only 32 bytes available

which is why I added the kafka_2.11 (version 0.10.2.1) dependency.
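
That SchemaException is the typical symptom of a Kafka client/broker protocol mismatch, so another way the same pinning is sometimes expressed in sbt, as a sketch, is to force the transitive kafka-clients artifact onto the broker's version instead of pulling in the whole kafka_2.11 server artifact (the 0.10.2.1 version below is taken from the question; whether it matches the broker is an assumption):

    // build.sbt (sketch): pin the kafka-clients jar that spark-streaming-kafka-0-10
    // resolves so the client wire protocol matches the broker.
    dependencyOverrides += "org.apache.kafka" % "kafka-clients" % "0.10.2.1"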

0 Answers