Question:

Spark: IllegalArgumentException: "Unsupported class file major version 55"

锺博耘
2023-03-14

I hit this error when running .toPandas(). I tried the solutions mentioned in "Pyspark error - Unsupported class file major version 55" and "Pyspark .topandas(): 'Unsupported class file major version 55'", but without success.

...
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)
...
java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
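
For context: class file major version 55 corresponds to Java 11 (Java 8 compiles to major version 52), and the java.base/ prefixes in the reflection frames of the trace below only appear on Java 9+, so the JVM that Py4J launched is evidently Java 11 even though the shell reports 1.8.0_201. A kernel or terminal started before the export would not see the new JAVA_HOME. A minimal check, assuming an active SparkSession bound to the name spark:

# Ask the JVM on the far side of the Py4J gateway which Java it is running.
# (_jvm is a private PySpark attribute, but this is a common diagnostic.)
print(spark.sparkContext._jvm.java.lang.System.getProperty("java.version"))
# Expect 1.8.0_xx for Spark 2.x; an 11.x answer reproduces this error.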

Full error log:

Py4JJavaError: An error occurred while calling o49.collectToPython.
: java.lang.IllegalArgumentException: Unsupported class file major version 55
    at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:166)
    at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:148)
    at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:136)
    at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:237)
    at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:49)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:517)
    at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:500)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
    at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
    at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:134)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
    at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:500)
    at org.apache.xbean.asm6.ClassReader.readCode(ClassReader.java:2175)
    at org.apache.xbean.asm6.ClassReader.readMethod(ClassReader.java:1238)
    at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:631)
    at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:355)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:307)
    at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:306)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:306)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2100)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:299)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3258)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3255)
    at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3365)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3364)
    at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3255)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.base/java.lang.Thread.run(Thread.java:834)


During handling of the above exception, another exception occurred:

IllegalArgumentException                  Traceback (most recent call last)
<ipython-input-3-198b8ee6e584> in <module>()
      1 df3 = df2.select('col').na.drop()
----> 2 print(df3.toPandas())
      3 #print(df3.rdd.flatMap(lambda x: x).toPandas())

/usr/local/spark/python/pyspark/sql/dataframe.py in toPandas(self)
   2140 
   2141         # Below is toPandas without Arrow optimization.
-> 2142         pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns)
   2143 
   2144         dtype = {}

/usr/local/spark/python/pyspark/sql/dataframe.py in collect(self)
    531         """
    532         with SCCallSiteSync(self._sc) as css:
--> 533             sock_info = self._jdf.collectToPython()
    534         return list(_load_from_socket(sock_info, BatchedSerializer(PickleSerializer())))
    535 

~/anaconda3/envs/en_env/lib/python3.5/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/usr/local/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     77                 raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
     78             if s.startswith('java.lang.IllegalArgumentException: '):
---> 79                 raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
     80             raise
     81     return deco

IllegalArgumentException: 'Unsupported class file major version 55'

1 answer

呼延卓
2023-03-14

You can run

sudo update-alternatives --config javac

to select a different Java installation. Spark 2.x requires Java 8.
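
(Note that update-alternatives --config javac only switches the compiler; the runtime Spark actually launches is the java binary, so on Debian-style systems you may need sudo update-alternatives --config java as well.)

If changing the system default is not an option, you can also point PySpark at a Java 8 installation from Python itself, before the first SparkSession is created in the process. A minimal sketch; the JDK path below is an assumption, substitute the output of /usr/libexec/java_home -v 1.8 on macOS or your distribution's JDK 8 location:

import os
from pyspark.sql import SparkSession

# Hypothetical Java 8 home -- replace with the path reported by
# `/usr/libexec/java_home -v 1.8` (macOS) or your JDK 8 install.
os.environ["JAVA_HOME"] = "/Library/Java/JavaVirtualMachines/jdk1.8.0_201.jdk/Contents/Home"

# JAVA_HOME must be set before the gateway JVM starts, i.e. before the
# first SparkSession/SparkContext of this process; in a notebook that
# already raised the error above, restart the kernel first.
spark = SparkSession.builder.master("local[*]").getOrCreate()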
