Spark: Unsupported class version error

I am trying to run a Java Spark job, using spark-submit, on a cluster where all nodes have Java 1.7 installed.

The job fails with a java.lang.UnsupportedClassVersionError: com/windlogics/dmf/wether/MyClass: Unsupported major.minor version 51.0.

This error is usually caused by compiling with a higher version of Java and running with a lower one. However, I have verified that the code is being compiled with 1.7.

Also, the job works fine when the master is set to local. How can I go about debugging and fixing this error?

A part of the error log is below.

15/01/21 15:14:57 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, sphddp07.zzz.local): java.lang.UnsupportedClassVersionError: com/zzz/dmf/wether/MyClass: Unsupported major.minor version 51.0
        java.lang.ClassLoader.defineClass1(Native Method)
        java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
        java.lang.ClassLoader.defineClass(ClassLoader.java:615)
        java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
        java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
        java.net.URLClassLoader.access$000(URLClassLoader.java:58)
        java.net.URLClassLoader$1.run(URLClassLoader.java:197)
        java.security.AccessController.doPrivileged(Native Method)
        java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        java.lang.ClassLoader.loadClass(ClassLoader.java:306)
        java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        java.lang.Class.forName0(Native Method)
        java.lang.Class.forName(Class.java:247)
        org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:59)
        java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1574)
        java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1495)
        java.io.ObjectInputStream.readClass(ObjectInputStream.java:1461)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1311)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
        java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
        scala.collection.immutable.$colon$colon.readObject(List.scala:362)
        sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        java.lang.reflect.Method.invoke(Method.java:597)
        java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:969)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1848)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
        java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
        scala.collection.immutable.$colon$colon.readObject(List.scala:362)
        sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        java.lang.reflect.Method.invoke(Method.java:597)
        java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:969)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1848)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
        java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
        org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
        org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

I encountered the same error message. I discovered that when I typed java -version it reported 1.7, and I needed Java 8. Here's how to update:

sudo yum install java-1.8.0
sudo alternatives --config java

This error also happens when Spark does not point to the correct JAVA_HOME, i.e. the JDK version against which the application jars were built.

I had the same issue. I tried setting the appropriate JAVA_HOME in the .bashrc files on both the master and slave machines, but this did not help.

Then, when I set JAVA_HOME at the whole-cluster level, the major.minor error went away. I use Cloudera, so I had to set JAVA_HOME in Cloudera Manager to resolve this error.
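
As an alternative to a cluster-wide setting, Spark can also be handed an explicit JAVA_HOME per job through the spark.executorEnv.* properties (and, on YARN, spark.yarn.appMasterEnv.*). A minimal sketch, assuming a JDK installed at /usr/java/jdk1.7.0_45 on every node; the path, app name, and class name are placeholders, not taken from the question:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SubmitWithExplicitJdk {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("explicit-jdk-example")
                // JAVA_HOME handed to the executor processes (path is an assumption)
                .set("spark.executorEnv.JAVA_HOME", "/usr/java/jdk1.7.0_45")
                // JAVA_HOME for the YARN application master; only relevant on YARN
                .set("spark.yarn.appMasterEnv.JAVA_HOME", "/usr/java/jdk1.7.0_45");
        JavaSparkContext sc = new JavaSparkContext(conf);
        System.err.println("Driver java.version: " + System.getProperty("java.version"));
        sc.stop();
    }
}

In practice these properties are usually passed on the spark-submit command line with --conf rather than set in code, since on YARN they need to be known before the application master is launched.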

Please check your Spark version and its corresponding JDK requirements on the Spark documentation page: https://spark.apache.org/docs/latest/.

For example, Spark 2.4.5 runs on Java 8, and I had Java 13 on my system. I resolved the issue by moving my system to the required JDK.

If you suspect a JAVA_HOME mismatch, do an env | grep -i java on the failing node to see which Java installation and JAVA_HOME the Spark processes actually pick up.

When the JRE or JVM running a class cannot understand that class file's version, it throws java.lang.UnsupportedClassVersionError: XXX : Unsupported major.minor version 51.0, where XXX is the name of the class with the incompatible version. The simplest fix is to upgrade the JRE on the machines that run the class.
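
If you want to confirm what a class in your jar was actually compiled for, the class file itself records the major version (50 = Java 6, 51 = Java 7, 52 = Java 8). A minimal sketch that reads it directly; the default file path is just an illustration, point it at a .class file extracted from your application jar:

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassVersionCheck {
    public static void main(String[] args) throws IOException {
        // Path to a .class file extracted from the application jar (placeholder)
        String path = args.length > 0 ? args[0] : "MyClass.class";
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            int magic = in.readInt();            // 0xCAFEBABE for a valid class file
            int minor = in.readUnsignedShort();
            int major = in.readUnsignedShort();  // 50 = Java 6, 51 = Java 7, 52 = Java 8
            System.out.printf("magic=0x%X, class file version %d.%d%n", magic, major, minor);
        }
    }
}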

I was able to make some progress. In order to get Spark running at home I'm using the following docker-compose.yml:

version: "2.2"
services:
  master:
    image: gettyimages/spark
    restart: always
    command: bin/spark-class org.apache.spark.deploy.master.Master -h master
    hostname: master
    environment:
      MASTER: spark://master:7077
      SPARK_CONF_DIR: /conf
      SPARK_PUBLIC_DNS: localhost
    expose:
      - 7001
      - 7002

Comments
  • "all nodes have java 1.7 installed" - but are you running with Java 1.7? Can you log the value of the java.version system property?
  • On the node that it fails on, this is the java version: -bash-3.2$ java -version: java version "1.7.0_45", Java(TM) SE Runtime Environment (build 1.7.0_45-b18), Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
  • Well that's what happens when you run java from the command line from bash - but is that definitely the version that's running and failing? That's why I suggested you should add some logging.
  • I added System.err.println("Java Version: " + System.getProperty("java.version")) in my main method, which produced "Java Version: 1.7.0_45". Although I don't actually know whether that piece of code is being run on the cluster nodes (see the sketch after these comments).
  • Right, so you need to find that out. Sorry to be so picky, but when it looks like an environmental problem, you really need to find out the exact version running where the problem is. (Version 51 is for Java 1.7, so it looks like something is running an earlier version somewhere.)
  • You can have multiple versions of Java installed and still run the code; the only thing you need is to set the proper JAVA_HOME and you are good to go.
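
Following up on the comments above, here is a minimal sketch of how to log java.version from inside a task so that it is read in the executor JVMs rather than only in the driver. The app name, input values, and use of hostnames are illustrative; compile it with the oldest -source/-target you expect on the cluster so the helper class itself can load everywhere:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

import java.net.InetAddress;
import java.util.Arrays;
import java.util.List;

public class ExecutorJavaVersion {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("executor-java-version");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read in the driver JVM
        System.err.println("Driver java.version: " + System.getProperty("java.version"));

        // map() runs on the executors, so java.version is read in the executor JVMs
        List<String> versions = sc.parallelize(Arrays.asList(1, 2, 3, 4), 4)
                .map(new Function<Integer, String>() {
                    @Override
                    public String call(Integer i) throws Exception {
                        return System.getProperty("java.version") + " @ "
                                + InetAddress.getLocalHost().getHostName();
                    }
                })
                .distinct()
                .collect();

        System.err.println("Executor java.version values: " + versions);
        sc.stop();
    }
}

If the executors run an older JVM than the helper class targets, even this sketch will fail with the same UnsupportedClassVersionError, which is itself a useful signal about which nodes are misconfigured.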