Alternative to openjdk:8-alpine for Kafka Streams

I am using openjdk:8-alpine for deploying a Kafka Streams application. I am using windowing, and it crashes with the error below:

Exception in thread "app-4a382bdc55ae-StreamThread-1" java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni94709417646402513.so: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /tmp/librocksdbjni94709417646402513.so)
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
    at java.lang.Runtime.load0(Runtime.java:809)
    at java.lang.System.load(System.java:1086)
    at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:78)
    at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:56)
    at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:64)
    at org.rocksdb.RocksDB.<clinit>(RocksDB.java:35)
    at org.rocksdb.Options.<clinit>(Options.java:22)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:116)
    at org.apache.kafka.streams.state.internals.Segment.openDB(Segment.java:43)
    at org.apache.kafka.streams.state.internals.Segments.getOrCreateSegment(Segments.java:91)
    at org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStore.put(RocksDBSegmentedBytesStore.java:100)
    at org.apache.kafka.streams.state.internals.RocksDBSessionStore.put(RocksDBSessionStore.java:122)
    at org.apache.kafka.streams.state.internals.ChangeLoggingSessionBytesStore.put(ChangeLoggingSessionBytesStore.java:78)
    at org.apache.kafka.streams.state.internals.ChangeLoggingSessionBytesStore.put(ChangeLoggingSessionBytesStore.java:33)
    at org.apache.kafka.streams.state.internals.CachingSessionStore.putAndMaybeForward(CachingSessionStore.java:177)
    at org.apache.kafka.streams.state.internals.CachingSessionStore.access$000(CachingSessionStore.java:38)
    at org.apache.kafka.streams.state.internals.CachingSessionStore$1.apply(CachingSessionStore.java:88)
    at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:142)
    at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:100)
    at org.apache.kafka.streams.state.internals.ThreadCache.flush(ThreadCache.java:127)
    at org.apache.kafka.streams.state.internals.CachingSessionStore.flush(CachingSessionStore.java:193)
    at org.apache.kafka.streams.state.internals.MeteredSessionStore.flush(MeteredSessionStore.java:169)
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:244)
    at org.apache.kafka.streams.processor.internals.AbstractTask.flushState(AbstractTask.java:195)
    at org.apache.kafka.streams.processor.internals.StreamTask.flushState(StreamTask.java:332)
    at org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:312)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:307)
    at org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:297)
    at org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:67)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:357)
    at org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:347)
    at org.apache.kafka.streams.processor.internals.TaskManager.commitAll(TaskManager.java:403)
    at org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:994)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:811)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720)

Searching for the above issue, I came across https://issues.apache.org/jira/browse/KAFKA-4988, but it didn't help.

So, Alpine uses musl libc, which is not supported by RocksDB. The issue tracking musl libc support in RocksDB is facebook/rocksdb#3143.
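
You can see the mismatch directly by looking for the loader named in the error message. A quick sketch against the image from the question (the exact echo text is just illustrative):

FROM openjdk:8-alpine
# Alpine only ships the musl loader; the glibc loader that librocksdbjni asks for is not present
RUN ls -l /lib/ld-musl-x86_64.so.1 && \
    (ls -l /lib64/ld-linux-x86-64.so.2 || echo "ld-linux-x86-64.so.2 not found, as the crash reports")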

Question: Is there any OpenJDK Docker image I can use to run my Kafka Streams application without hitting this RocksDB issue?

Edit-1: I tried RUN apk add --no-cache bash libc6-compat, but it too fails, with the error below:

# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x000000000011e336, pid=1, tid=0x00007fc6a3cc8ae8
#
# JRE version: OpenJDK Runtime Environment (8.0_181-b13) (build 1.8.0_181-b13)
# Java VM: OpenJDK 64-Bit Server VM (25.181-b13 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 3.9.0
# Distribution: Custom build (Tue Oct 23 11:27:22 UTC 2018)
# Problematic frame:
# C  0x000000000011e336
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again

The solution that worked for me was to change the Docker image from openjdk:8-alpine to adoptopenjdk/openjdk8:alpine-slim.

adoptopenjdk/openjdk8:alpine-slim is glibc compatible.
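
In Dockerfile terms the fix is just the base image; a minimal sketch (the jar name and path are placeholders for your own build):

FROM adoptopenjdk/openjdk8:alpine-slim
# Same Alpine-sized image, but with a glibc-compatible OpenJDK build
COPY app.jar /opt/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app.jar"]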

I learned about this image from http://blog.gilliard.lol/2018/11/05/alpine-jdk11-images.html.

Hope it helps someone.

Rather than changing your default Docker base image, you can build glibc for an Alpine distro. Even better, you can grab the pre-built apk from Sasha Gerrand's GitHub page. Here's what we added to our Dockerfile to get this working with his pre-built apk:

# GLIBC - Kafka Dependency (RocksDB)
# Used by Kafka for default State Stores.
# glibc's apk was built for Alpine Linux and added to our repository
# from this source: https://github.com/sgerrand/alpine-pkg-glibc/
ARG GLIBC_APK=glibc-2.30-r0.apk
COPY ${KAFKA_DIR}/${GLIBC_APK} opt/
RUN apk add --no-cache --allow-untrusted opt/${GLIBC_APK}

# C++ Std Lib - Kafka Dependency (RocksDB)
RUN apk add --no-cache libstdc++
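
For context, a sketch of how these fragments might sit in a complete Dockerfile (the KAFKA_DIR default, the glibc apk version, and app.jar are placeholders/assumptions to adapt to your own build):

FROM openjdk:8-alpine

# glibc apk pre-built for Alpine, downloaded beforehand into the build context
# from https://github.com/sgerrand/alpine-pkg-glibc/ (version is a placeholder)
ARG KAFKA_DIR=docker
ARG GLIBC_APK=glibc-2.30-r0.apk
COPY ${KAFKA_DIR}/${GLIBC_APK} /opt/
RUN apk add --no-cache --allow-untrusted /opt/${GLIBC_APK}

# C++ standard library, also needed by RocksDB's JNI library
RUN apk add --no-cache libstdc++

# Application jar name/path are placeholders
COPY app.jar /opt/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app.jar"]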

I'm developing my Kafka Streams application using Docker and I run my jars using the official openjdk:8-jre-alpine image. I'm just starting to use windowing, and now the JVM crashes, I think because of an issue with RocksDB. It's trivial to fix on my part: just use the Debian Jessie-based image.
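
That workaround is again just a base-image change; a minimal sketch (openjdk:8-jre is the Debian-based variant of the same official image):

# Debian-based, so glibc and ld-linux-x86-64.so.2 are available for RocksDB's JNI library
FROM openjdk:8-jre
# ...rest of the Dockerfile unchanged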

Comments
  • Have you tried this solution? github.com/wurstmeister/kafka-docker/issues/…
  • @MostafaHussein Yes, I tried it and it gave a segmentation fault error.
  • Have you tried using another base image? Like adoptopenjdk/openjdk8:slim
  • @bratkartoffel No, I haven't tried any other OpenJDK variant. Will it solve my problem?
  • @Mukeshprajapati I have tried to install the package using apk add --no-cache bash libc6-compat and it worked. Can you ensure that you have the latest version of the image? Image ID: 792ff45a2a17
  • I tried, but unfortunately it is failing with a segmentation fault.
  • @Mukeshprajapati I see. Could you try using anapsix/alpine-java:8 as the base image? It is Alpine 3.8 + glibc + Oracle Java 8 (hub.docker.com/r/anapsix/alpine-java). It may be more compatible with Kafka.
  • @Mukeshprajapati Also, is a JVM crash log (hs_err*.log) being generated? If it is, please post it; it could help diagnose the exact issue.
  • I got it working with adoptopenjdk/openjdk8:alpine-slim. Thanks for helping.
  • @Mukeshprajapati Good job! I see that image is glibc-enabled, too, using the ordinary (glibc compatible) OpenJDK Linux binaries.