I wrote the following code as a Spark job to consume data from Kafka.
Is anything missing for streaming from Kafka or for processing the data after it is retrieved? And how can I test whether data is actually being retrieved?
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

import scala.Tuple2;

// StreamingExamples.setStreamingLogLevels();
SparkConf sparkConf = new SparkConf().setAppName("JavaKafkaWordCount").setMaster("local[*]");
// Create the context with a 1 second batch interval
JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(1000));
Map<String, Integer> topicMap = new HashMap<>();
topicMap.put("Ptopic", 1);
// createStream(streaming context, ZooKeeper quorum, consumer group id, topic -> number of receiver threads)
JavaPairReceiverInputDStream<String, String> messages =
        KafkaUtils.createStream(jssc, "localhost:2181", "5", topicMap);
/*messages.foreach(new Function<JavaRDD<String, String>, Void>() {
public Void call(JavaRDD<String, String> accessLogs) {
return null;
}}
);*/
JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
@Override
public String call(Tuple2<String, String> tuple2) {
/*System.out.println(tuple2._1().toString());
System.out.println(tuple2._2().toString());*/
return tuple2._2();
}
});
lines.print();
jssc.start();
jssc.awaitTermination();
The result here is only printed to the console..
Answer 0 (score: 1)
You can apply the basic big-data operations such as map, reduce, and flatMap — for example, the classic word count shown below.
Update 1:
JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
@Override
public String call(Tuple2<String, String> tuple2) {
/*System.out.println(tuple2._1().toString());
System.out.println(tuple2._2().toString());*/
return tuple2._2();
}
});
// TODO: make some transformations here:
// Note: lines is a JavaDStream<String>, so each element is a raw String; the original
// snippet called getCallType()/setCallType() on it, which does not compile. The same
// clean-and-filter idea, applied directly to the whole line:
final boolean isFilteredOnFire = true; // example flag: enable/disable the "Fire" filter
lines = lines.map(line -> {
    // clean data: strip quotes and the characters '-', '|', ','
    return line.replaceAll("\"", "").replaceAll("[-|,]", "");
}).filter(line -> {
    // filter data: keep all lines, or only those containing the word "Fire"
    return !isFilteredOnFire || line.matches("(?i).*\\bFire\\b.*");
});
lines.print();
jssc.start();
jssc.awaitTermination();