Why can't Spark Streaming read from a Kafka topic?

Date: 2017-04-21 08:30:24

Tags: amazon-ec2 apache-kafka spark-streaming apache-spark-1.6

  • Spark Streaming 1.6.0
  • Apache Kafka 0.10.0.1

I am using Spark Streaming to read from the sample topic. The code runs without any errors or exceptions, but I get no data on the console from the print() method.

I checked whether there are messages in the topic:

./bin/kafka-console-consumer.sh \
    --zookeeper ip-172-xx-xx-xxx:2181 \
    --topic sample \
    --from-beginning

and the messages I get back are:

message no. 1
message no. 2
message no. 3
message no. 4
message no. 5

The command that runs the streaming job:

./bin/spark-submit \
    --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:MaxDirectMemorySize=512m" \
    --jars /home/ubuntu/zifferlabs/target/ZifferLabs-1-jar-with-dependencies.jar \
    --class "com.zifferlabs.stream.SampleStream" \
    /home/ubuntu/zifferlabs/src/main/java/com/zifferlabs/stream/SampleStream.java

Here is the entire code:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import kafka.serializer.DefaultDecoder;
import kafka.serializer.StringDecoder;
import scala.Tuple2;

public class SampleStream {
  private static void processStream() {
    SparkConf conf = new SparkConf().setAppName("sampleStream")
            .setMaster("local[3]")
            .set("spark.serializer", "org.apache.spark.serializer.JavaSerializer")
            .set("spark.driver.memory", "2g").set("spark.streaming.blockInterval", "1000")
            .set("spark.driver.allowMultipleContexts", "true")
            .set("spark.scheduler.mode", "FAIR");

    JavaStreamingContext jsc = new JavaStreamingContext(conf, new Duration(Long.parseLong("2000")));

    String[] topics = "sample".split(",");
    Set<String> topicSet = new HashSet<String>(Arrays.asList(topics));
    Map<String, String> props = new HashMap<String, String>();
    props.put("metadata.broker.list", "ip-172-xx-xx-xxx:9092");
    props.put("kafka.consumer.id", "sample_con");
    props.put("group.id", "sample_group");
    props.put("zookeeper.connect", "ip-172-xx-xx-xxx:2181");
    props.put("zookeeper.connection.timeout.ms", "16000");

    JavaPairInputDStream<String, byte[]> kafkaStream =
      KafkaUtils.createDirectStream(jsc, String.class, byte[].class, StringDecoder.class,
                                    DefaultDecoder.class, props, topicSet);

    JavaDStream<String> data = kafkaStream.map(new Function<Tuple2<String,byte[]>, String>() {
      public String call(Tuple2<String, byte[]> arg0) throws Exception {
        // Decode the byte[] payload explicitly: calling toString() on a byte[]
        // only prints the array reference (e.g. "[B@6d06d69c"), not the message text.
        String value = new String(arg0._2(), StandardCharsets.UTF_8);
        System.out.println("$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$ value is: " + value);
        return value;
      }
    });

    data.print();

    System.out.println("Spark Streaming started....");
    jsc.checkpoint("/home/spark/sparkChkPoint");
    jsc.start();
    jsc.awaitTermination();
    System.out.println("Stopped Spark Streaming");
  }

  public static void main(String[] args) {
    processStream();
  }
}

1 Answer:

Answer 0 (score: 0)

I think you've got the code right, but the command line you use to execute it is not correct.

You spark-submit the application as follows (formatting's mine; spark.executor.extraJavaOptions removed for simplicity):

./bin/spark-submit \
  --jars /home/ubuntu/zifferlabs/target/ZifferLabs-1-jar-with-dependencies.jar \
  --class "com.zifferlabs.stream.SampleStream" \
  /home/ubuntu/zifferlabs/src/main/java/com/zifferlabs/stream/SampleStream.java

I think it cannot work, because spark-submit is given your Java source code, not executable code (a jar).

spark-submit your application as follows instead:

./bin/spark-submit \
  --class "com.zifferlabs.stream.SampleStream" \
  /home/ubuntu/zifferlabs/target/ZifferLabs-1-jar-with-dependencies.jar

Use --class to define the "entry point" to your Spark application, and pass the jar with dependencies as the only input argument to spark-submit.
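
For reference: a jar-with-dependencies like the one above is the self-contained artifact a Maven build typically produces via the Assembly plugin's jar-with-dependencies descriptor. A minimal sketch of rebuilding it before resubmitting, assuming that plugin is already configured in the project's pom.xml (the mvn invocation and plugin setup are assumptions; the paths come from the question):

# Rebuild the executable fat jar. Assumes pom.xml configures
# maven-assembly-plugin with the "jar-with-dependencies" descriptor.
cd /home/ubuntu/zifferlabs
mvn clean package

# The artifact ends up under target/ and is what spark-submit expects:
ls target/ZifferLabs-1-jar-with-dependencies.jar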

Give it a try and report back!