I get this error when trying to stream from Spark (using Java) to a secured Kafka (using the SASL PLAINTEXT mechanism).
Here is the more detailed error message:
17/07/07 14:38:43 INFO SimpleConsumer: Reconnect due to socket error: java.io.EOFException: Received -1 when reading from a channel, the socket has likely been closed.
Exception in thread "main" org.apache.spark.SparkException: java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
at scala.util.Either.fold(Either.scala:98)
at org.apache.spark.streaming.kafka.KafkaCluster$.checkErrors(KafkaCluster.scala:365)
at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:222)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:607)
at org.apache.spark.streaming.kafka.KafkaUtils.createDirectStream(KafkaUtils.scala)
at SparkStreaming.main(SparkStreaming.java:41)
Is there a specific parameter I need to set in kafkaParams, or something else, to get Spark Streaming to authenticate to Kafka?
I have already added the SASL PLAINTEXT security parameters to my Kafka broker's server.properties:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
listeners=SASL_PLAINTEXT://:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
super.users=User:admin
Here is also my kafka_jaas_server.conf:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin1!"
user_admin="admin1!"
user_aldys="admin1!";
};
And this is my kafka_jaas_client.conf:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="aldys"
password="admin1!";
};
I also include my server JAAS config when starting the Kafka broker, by editing the last line of kafka-server-start.sh to:
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS -Djava.security.auth.login.config=/etc/kafka/kafka_jaas_server.conf kafka.Kafka "$@"
With this setup, I can produce to and consume from the topics for which I previously set ACLs.
Here is my Java code:
import java.util.*;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import kafka.serializer.StringDecoder;
import scala.Tuple2;
public class SparkStreaming {
    public static void main(String args[]) throws Exception {
        if (args.length < 2) {
            System.err.println("Usage: SparkStreaming <brokers> <topics>\n" +
                    " <brokers> is a list of one or more Kafka brokers\n" +
                    " <topics> is a list of one or more kafka topics to consume from\n\n");
            System.exit(1);
        }
        String brokers = args[0];
        String topics = args[1];
        Set<String> topicsSet = new HashSet<>(Arrays.asList(topics.split(",")));
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("group.id", "group1");
        kafkaParams.put("auto.offset.reset", "smallest");
        kafkaParams.put("security.protocol", "SASL_PLAINTEXT");
        SparkConf sparkConf = new SparkConf()
                .setAppName("SparkStreaming")
                .setMaster("local[2]");
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));
        JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
                jssc,
                String.class,
                String.class,
                StringDecoder.class,
                StringDecoder.class,
                kafkaParams,
                topicsSet
        );
        messages.print();
        jssc.start();
        jssc.awaitTermination();
    }
}
And here are the dependencies I use in my pom.xml:
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.1.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>2.1.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka_2.11</artifactId>
        <version>1.6.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.10.2.1</version>
    </dependency>
</dependencies>
Answer 0 (score: 0)
I solved my problem by following the guide at https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html.
I replaced spark-streaming-kafka_2.11 in my pom.xml with spark-streaming-kafka-0-10_2.11 (the 2.11 build).
Based on the error log above, I was curious about the error being thrown by SimpleConsumer, which turns out to be the old consumer. I then replaced my pom dependency as described above and changed my code to follow the Spark Streaming + Kafka 0-10 integration guide. Now I can stream from the SASL PLAIN secured Kafka.
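For reference, here is a rough sketch of what the code looks like after switching to the 0-10 integration (the dependency becomes spark-streaming-kafka-0-10_2.11, in the version matching Spark). The bootstrap servers, group id and security settings are carried over from my question; the topic name and class name below are placeholders, so treat this as an illustration of the API rather than my exact code. The client JAAS file still has to be made visible to the JVM via java.security.auth.login.config.
import java.util.*;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

public class SparkStreamingV010 {
    public static void main(String[] args) throws Exception {
        SparkConf sparkConf = new SparkConf()
                .setAppName("SparkStreaming")
                .setMaster("local[2]");
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));

        // The 0-10 integration uses the new Kafka consumer, which understands SASL.
        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "group1");
        kafkaParams.put("auto.offset.reset", "earliest"); // new-consumer name for "smallest"
        kafkaParams.put("security.protocol", "SASL_PLAINTEXT");
        kafkaParams.put("sasl.mechanism", "PLAIN");

        Set<String> topicsSet = new HashSet<>(Arrays.asList("mytopic")); // placeholder topic

        JavaInputDStream<ConsumerRecord<String, String>> messages =
                KafkaUtils.createDirectStream(
                        jssc,
                        LocationStrategies.PreferConsistent(),
                        ConsumerStrategies.<String, String>Subscribe(topicsSet, kafkaParams));

        // ConsumerRecord is not serializable, so print the values rather than the records.
        messages.map(record -> record.value()).print();

        jssc.start();
        jssc.awaitTermination();
    }
}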
Answer 1 (score: 0)
Is your console producer/consumer working? If it isn't, you should check your Kafka server config and JAAS config again.
Otherwise, here are a few things I would suggest...
Add the JAAS file to Spark:
.config("spark.driver.extraJavaOptions","-Djava.security.auth.login.config=/path/to/jaas.conf")
.config("spark.executor.extraJavaOptions","-Djava.security.auth.login.config=/path/to/jaas.conf")
Or you can add it to spark-submit with --conf. Make sure the JAAS file has read permission.
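If you build a SparkConf directly, as in the code from the question, the same options can be set there as well. A small sketch, with the JAAS path as a placeholder:
SparkConf sparkConf = new SparkConf()
        .setAppName("SparkStreaming")
        .setMaster("local[2]")
        // Point the driver and the executors at the client JAAS file (placeholder path).
        .set("spark.driver.extraJavaOptions",
                "-Djava.security.auth.login.config=/path/to/jaas.conf")
        .set("spark.executor.extraJavaOptions",
                "-Djava.security.auth.login.config=/path/to/jaas.conf");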
You also have to configure the service name, which should match the principal name of your Kafka brokers.
For example: kafka/hostname.com@EXAMPLE.com
Then add:
kafkaParams.put("sasl.kerberos.service.name", "kafka");