Alternative to openjdk:8-alpine for Kafka Streams

Asked: 2019-03-04 13:34:54

Tags: docker apache-kafka apache-kafka-streams alpine rocksdb

I am deploying a Kafka Streams application with openjdk:8-alpine. I am using windowing, and the application crashes with the following error:

Exception in thread "app-4a382bdc55ae-StreamThread-1" java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni94709417646402513.so: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /tmp/librocksdbjni94709417646402513.so)
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
    at java.lang.Runtime.load0(Runtime.java:809)
    at java.lang.System.load(System.java:1086)
    at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:78)
    at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:56)
    at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:64)
    at org.rocksdb.RocksDB.<clinit>(RocksDB.java:35)
    at org.rocksdb.Options.<clinit>(Options.java:22)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:116)
    at org.apache.kafka.streams.state.internals.Segment.openDB(Segment.java:43)
    at org.apache.kafka.streams.state.internals.Segments.getOrCreateSegment(Segments.java:91)
    at org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStore.put(RocksDBSegmentedBytesStore.java:100)
    at org.apache.kafka.streams.state.internals.RocksDBSessionStore.put(RocksDBSessionStore.java:122)
    at org.apache.kafka.streams.state.internals.ChangeLoggingSessionBytesStore.put(ChangeLoggingSessionBytesStore.java:78)
    at org.apache.kafka.streams.state.internals.ChangeLoggingSessionBytesStore.put(ChangeLoggingSessionBytesStore.java:33)
    at org.apache.kafka.streams.state.internals.CachingSessionStore.putAndMaybeForward(CachingSessionStore.java:177)
    at org.apache.kafka.streams.state.internals.CachingSessionStore.access$000(CachingSessionStore.java:38)
    at org.apache.kafka.streams.state.internals.CachingSessionStore$1.apply(CachingSessionStore.java:88)
    at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:142)
    at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:100)
    at org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:127)
    at org.apache.kafka.streams.state.internals.CachingSessionStore.flush(CachingSessionStore.java:193)
    at org.apache.kafka.streams.state.internals.MeteredSessionStore.flush(MeteredSessionStore.java:169)
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:244)
    at org.apache.kafka.streams.processor.internals.AbstractTask.flushState(AbstractTask.java:195)
    at org.apache.kafka.streams.processor.internals.StreamTask.flushState(StreamTask.java:332)
    at org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:312)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:307)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:297)
    at org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:67)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:357)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:347)
    at org.apache.kafka.streams.processor.internals.TaskManager.commitAll(TaskManager.java:403)
    at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:994)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:811)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720)

While searching for this problem I came across https://issues.apache.org/jira/browse/KAFKA-4988, but it did not help.

So: Alpine uses musl libc, which RocksDB does not support. The issue for adding musl libc support to RocksDB is facebook/rocksdb#3143.

Question: Is there any openjdk Docker image I can use to run my Kafka Streams application without running into the RocksDB problem?

Edit 1: I tried RUN apk add --no-cache bash libc6-compat, but it also fails, with the following error:

# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x000000000011e336, pid=1, tid=0x00007fc6a3cc8ae8
#
# JRE version: OpenJDK Runtime Environment (8.0_181-b13) (build 1.8.0_181-b13)
# Java VM: OpenJDK 64-Bit Server VM (25.181-b13 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 3.9.0
# Distribution: Custom build (Tue Oct 23 11:27:22 UTC 2018)
# Problematic frame:
# C  0x000000000011e336
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again

4 Answers:

Answer 0 (score: 1)

The solution that worked for me was to change the Docker image from openjdk:8-alpine to adoptopenjdk/openjdk8:alpine-slim.

adoptopenjdk/openjdk8:alpine-slim is glibc compatible.
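
For illustration, a minimal Dockerfile sketch of that change (the JAR path and entrypoint below are placeholders, not from the original post):

FROM adoptopenjdk/openjdk8:alpine-slim

# Only the base image changes; the application is packaged and started as before.
# streams-app.jar is a placeholder name for the Kafka Streams application JAR.
COPY target/streams-app.jar /app/streams-app.jar
ENTRYPOINT ["java", "-jar", "/app/streams-app.jar"]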

I learned about this image from http://blog.gilliard.lol/2018/11/05/alpine-jdk11-images.html.

Hope it helps someone.

Answer 1 (score: 0)

The ticket you linked, https://issues.apache.org/jira/browse/KAFKA-4988, gives some insight into the problem.

As mentioned there, RocksDB appears to be incompatible with musl libc and to require glibc instead.

安装libc6-compact可能不会这样做:它在musl libc上提供了一个兼容层,该层模仿glibc库结构并实现了一些缺少的功能,但与本身安装glibc并不相同。 glibc是一个复杂的实现,因此兼容性库和实际glibc之间可能没有一对一的关联。参见here,了解musl / glibc的一些细微差别。

Reading through the ticket comments, the failing library seems to be librocksdbjni.so, which depends on libstdc++6.

So I would try the following (keeping openjdk:8-alpine as your base image):
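
A minimal sketch along those lines; the exact package set is an assumption drawn from the dependency noted above, not a verified fix:

FROM openjdk:8-alpine

# librocksdbjni.so links against the C++ standard library, so install it explicitly.
# Whether this alone is sufficient is an assumption based on the ticket comments.
RUN apk add --no-cache libstdc++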

Answer 2 (score: 0)

You can add glibc to the Alpine distribution instead of changing the default Docker base image. Even better, you can get a pre-built apk from Sasha Gerrand's github page. Here is what we added to our Dockerfile to make it work with his pre-built apk:

# # GLIBC - Kafka Dependency (RocksDB)
# Used by Kafka for default State Stores.
# glibc's apk was built for Alpine Linux and added to our repository
# from this source: https://github.com/sgerrand/alpine-pkg-glibc/
ARG GLIBC_APK=glibc-2.30-r0.apk
COPY ${KAFKA_DIR}/${GLIBC_APK} opt/
RUN apk add --no-cache --allow-untrusted opt/${GLIBC_APK}

# C++ Std Lib - Kafka Dependency (RocksDB)
RUN apk add --no-cache libstdc++
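
If you would rather not keep the apk in your own repository, a variant that downloads it at build time could look like this (the release URL pattern and the use of the image's built-in wget are assumptions, not part of the original answer):

# Fetch the pre-built glibc apk from the sgerrand/alpine-pkg-glibc GitHub releases
# (URL pattern assumed) and install it together with libstdc++.
ARG GLIBC_VERSION=2.30-r0
RUN wget -q -O /tmp/glibc.apk \
      https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VERSION}/glibc-${GLIBC_VERSION}.apk && \
    apk add --no-cache --allow-untrusted /tmp/glibc.apk && \
    apk add --no-cache libstdc++ && \
    rm /tmp/glibc.apk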

Answer 3 (score: 0)

https://issues.apache.org/jira/browse/KAFKA-4988 tracks the known incompatibility between Kafka Streams and Alpine Linux. For those using Java 11, adoptopenjdk/openjdk11:alpine-slim worked fine for me. Another solution is to keep using the openjdk:11-jdk-alpine image as the base image and then manually install the snappy-java lib:

FROM openjdk:11-jdk-alpine
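# gcompat provides a glibc compatibility layer on musl-based Alpine, including the
# ld-linux-x86-64.so.2 loader that the RocksDB native library failed to find above.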
RUN apk update && apk add --no-cache gcompat
...