I am trying to run a Kafka Streams application in Kubernetes. When I launch the pod I get the following exception:
Exception in thread "streams-pipe-e19c2d9a-d403-4944-8d26-0ef27ed5c057-StreamThread-1"
java.lang.UnsatisfiedLinkError: /tmp/snappy-1.1.4-5cec5405-2ce7-4046-a8bd-922ce96534a0-libsnappyjava.so:
Error loading shared library ld-linux-x86-64.so.2: No such file or directory
(needed by /tmp/snappy-1.1.4-5cec5405-2ce7-4046-a8bd-922ce96534a0-libsnappyjava.so)
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:179)
at org.xerial.snappy.SnappyLoader.loadSnappyApi(SnappyLoader.java:154)
at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:435)
at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:466)
at java.io.DataInputStream.readByte(DataInputStream.java:265)
at org.apache.kafka.common.utils.ByteUtils.readVarint(ByteUtils.java:168)
at org.apache.kafka.common.record.DefaultRecord.readFrom(DefaultRecord.java:292)
at org.apache.kafka.common.record.DefaultRecordBatch$1.readNext(DefaultRecordBatch.java:264)
at org.apache.kafka.common.record.DefaultRecordBatch$RecordIterator.next(DefaultRecordBatch.java:563)
at org.apache.kafka.common.record.DefaultRecordBatch$RecordIterator.next(DefaultRecordBatch.java:532)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.nextFetchedRecord(Fetcher.java:1060)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1095)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1200(Fetcher.java:949)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:570)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:531)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1146)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1103)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:851)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:808)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
Previously I tried launching Kafka and the kafka-streams-app using Docker containers, and they worked perfectly fine. This is my first attempt with Kubernetes.
This is my Dockerfile for the StreamsApp:
FROM openjdk:8u151-jdk-alpine3.7
COPY /target/streams-examples-0.1.jar /streamsApp/
COPY /target/libs /streamsApp/libs
CMD ["java", "-jar", "/streamsApp/streams-examples-0.1.jar"]
How can I fix this issue? Please help.
Edit:
/ # ldd /usr/bin/java
/lib/ld-musl-x86_64.so.1 (0x7f03f279a000)
Error loading shared library libjli.so: No such file or directory (needed by /usr/bin/java)
libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f03f279a000)
Error relocating /usr/bin/java: JLI_Launch: symbol not found
Answer 0 (score: 10)
The error message states that *libsnappyjava.so cannot find ld-linux-x86-64.so.2. That is the glibc dynamic loader, and the Alpine image does not run with glibc. You can try to make it work by installing the libc6-compat package in your Dockerfile, for example:
RUN apk update && apk add --no-cache libc6-compat
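Applied to the Dockerfile from the question, the fix might look like the following sketch (jar paths copied from the question):

```dockerfile
FROM openjdk:8u151-jdk-alpine3.7
# libc6-compat provides a glibc compatibility layer on musl-based Alpine,
# including the ld-linux-x86-64.so.2 loader that libsnappyjava.so asks for.
RUN apk update && apk add --no-cache libc6-compat
COPY /target/streams-examples-0.1.jar /streamsApp/
COPY /target/libs /streamsApp/libs
CMD ["java", "-jar", "/streamsApp/streams-examples-0.1.jar"]
```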
Answer 1 (score: 4)
In my case, installing the missing libc6-compat did not work. The application still threw java.lang.UnsatisfiedLinkError.
Then I found that inside the Docker image, /lib64/ld-linux-x86-64.so.2 exists and is a link to /lib/libc.musl-x86_64.so.1, but /lib contains only ld-musl-x86_64.so.1, not ld-linux-x86-64.so.2.
So I added a file named ld-linux-x86-64.so.2 in the /lib directory, linked to libc.musl-x86_64.so.1, and that solved the problem.
The Dockerfile I used:
FROM openjdk:8-jre-alpine
ENV TZ Asia/Shanghai
COPY entrypoint.sh /entrypoint.sh
RUN apk update && \
apk add --no-cache tzdata && \
apk add --no-cache libc6-compat && \
ln -s /lib/libc.musl-x86_64.so.1 /lib/ld-linux-x86-64.so.2 && \
mkdir /app && \
chmod a+x /entrypoint.sh
COPY build/libs/*.jar /app
ENTRYPOINT ["/entrypoint.sh"]
Conclusion:
RUN apk update && apk add --no-cache libc6-compat
RUN ln -s /lib/libc.musl-x86_64.so.1 /lib/ld-linux-x86-64.so.2
Answer 2 (score: 2)
There are two solutions to this problem:
You can use some other base image with the snappy-java lib pre-installed. For example, openjdk:8-jre-slim worked fine for me.
The other solution is to keep using the openjdk:8-jdk-alpine image as the base image, but then install the snappy-java lib manually:
FROM openjdk:8-jdk-alpine
RUN apk update && apk add --no-cache gcompat
...
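For the first option, the question's Dockerfile rebased onto the slim image could look like this sketch (jar paths copied from the question):

```dockerfile
# Debian-based image: glibc and its ld-linux-x86-64.so.2 loader are already present.
FROM openjdk:8-jre-slim
COPY /target/streams-examples-0.1.jar /streamsApp/
COPY /target/libs /streamsApp/libs
CMD ["java", "-jar", "/streamsApp/streams-examples-0.1.jar"]
```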
Answer 3 (score: 1)
If you are adding the Dockerfile via build.sbt, the correct way to do it is:
dockerfile in docker := {
  val artifact: File = assembly.value
  val artifactTargetPath = s"/app/${artifact.name}"
  new Dockerfile {
    from("openjdk:8-jre-alpine")
    copy(artifact, artifactTargetPath)
    run("apk", "add", "--no-cache", "gcompat")
    entryPoint("java", "-jar", artifactTargetPath)
  }
}
Installing gcompat will serve the purpose.
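For reference, the sbt definition above should produce a Dockerfile roughly equivalent to the following sketch (not the plugin's exact output; `your-assembly.jar` is a placeholder, since the real name comes from your assembly settings):

```dockerfile
FROM openjdk:8-jre-alpine
# Jar name is determined by ${artifact.name} in the sbt build; adjust accordingly.
COPY your-assembly.jar /app/your-assembly.jar
# gcompat supplies the glibc compatibility shim, including ld-linux-x86-64.so.2.
RUN apk add --no-cache gcompat
ENTRYPOINT ["java", "-jar", "/app/your-assembly.jar"]
```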
Answer 4 (score: 0)
I have built a Docker image in which I run a Spring Boot microservice, and the Kafka Streams topology runs perfectly.
I am sharing the Dockerfile here.
FROM openjdk:8-jdk-alpine
# Add Maintainer Info
LABEL description="Spring Boot Kafka Stream IoT Processor"
# Args for image
ARG PORT=8080
RUN apk update && apk upgrade && apk add --no-cache gcompat
RUN ln -s /bin/bash /usr/bin
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY resources/wait-for-it.sh wait-for-it.sh
COPY target/iot_processor.jar app.jar
RUN dos2unix wait-for-it.sh
RUN chmod +x wait-for-it.sh
RUN uname -a
RUN pwd
RUN ls -al
EXPOSE ${PORT}
CMD ["sh", "-c", "echo 'waiting for 300 seconds for kafka:9092 to be accessible before starting application' && ./wait-for-it.sh -t 300 kafka:9092 -- java -jar app.jar"]
Hope it helps someone.
Answer 5 (score: 0)
In Docker images built on an Alpine base, running
RUN apk update && apk add --no-cache libc6-compat gcompat
saved my life.
Answer 6 (score: 0)
I did not need to add libc6-compat in my Dockerfile, because the file /lib/libc.musl-x86_64.so.1 already existed in my container.
In the Dockerfile I only added:
RUN ln -s /lib/libc.musl-x86_64.so.1 /lib/ld-linux-x86-64.so.2
My container now consumes snappy-compressed messages without errors.