Question:

Exception in thread "main" java.lang.NoClassDefFoundError: scala/Product$class (Java)

孔甫
2023-03-14

I am running a Spark Streaming program written in Java that reads data from Kafka, but I ran into this error. I have been trying to work out whether it is because the Scala or Java version I use is too old. I am on JDK 15, yet the error still occurs. Can anyone help me fix it? Many thanks.

Here is the terminal output when I ran the project:

Exception in thread "main" java.lang.NoClassDefFoundError: scala/Product$class
        at org.apache.spark.streaming.kafka010.PreferConsistent$.<init>(LocationStrategy.scala:42)
        at org.apache.spark.streaming.kafka010.PreferConsistent$.<clinit>(LocationStrategy.scala)
        at org.apache.spark.streaming.kafka010.LocationStrategies$.PreferConsistent(LocationStrategy.scala:66)
        at org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent(LocationStrategy.scala)
        at demo.KafkaDemo.main(KafkaDemo.java:47)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:951)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1030)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1039)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: scala.Product$class
        at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:435)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:589)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
        ... 17 more
21/05/31 14:42:51 INFO SparkContext: Invoking stop() from shutdown hook
21/05/31 14:42:51 INFO SparkUI: Stopped Spark web UI at http://192.168.1.24:4040
21/05/31 14:42:51 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
21/05/31 14:42:51 INFO MemoryStore: MemoryStore cleared
21/05/31 14:42:51 INFO BlockManager: BlockManager stopped
21/05/31 14:42:51 INFO BlockManagerMaster: BlockManagerMaster stopped
21/05/31 14:42:51 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
21/05/31 14:42:51 INFO SparkContext: Successfully stopped SparkContext
21/05/31 14:42:51 INFO ShutdownHookManager: Shutdown hook called
21/05/31 14:42:51 INFO ShutdownHookManager: Deleting directory /tmp/spark-9bf7b2b8-aa48-4d13-91d6-7efd096200ef
21/05/31 14:42:51 INFO ShutdownHookManager: Deleting directory /tmp/spark-0cabea66-391d-4376-b851-02b923209992

Here is the project's pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>TikiData</groupId>
<artifactId>TikiData</artifactId>
<version>V1</version>
<dependencies>
    <!-- https://mvnrepository.com/artifact/com.google.code.gson/gson -->
    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
        <version>2.8.6</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.3.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.12</artifactId>
        <version>3.1.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-10 -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
        <version>2.0.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>2.3.3</version>
        <scope>provided</scope>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.scala-lang/scala-library -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>2.12.12</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.scala-lang/scala-compiler -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-compiler</artifactId>
        <version>2.12.12</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.scala-lang/scala-reflect -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-reflect</artifactId>
        <version>2.12.13</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>2.7.0</version>
    </dependency>
</dependencies>
<build>
    <sourceDirectory>src</sourceDirectory>
    <plugins>
        <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.8.1</version>
            <configuration>
                <release>15</release>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                    <configuration>
                        <archive>
                            <manifest>
                                <mainClass>
                                    demo.KafkaDemo
                                </mainClass>
                            </manifest>
                        </archive>
                        <descriptorRefs>
                            <descriptorRef>jar-with-dependencies</descriptorRef>
                        </descriptorRefs>
                    </configuration>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>4.4.0</version>
            <configuration>
                <scalaVersion>2.12.2</scalaVersion>
            </configuration>
        </plugin>
    </plugins>
</build>
</project>

And here is the project's main class:

package demo;

import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

import scala.Tuple2;

public class KafkaDemo {
    public static void main(String[] args) throws InterruptedException {
        // Create a local StreamingContext with a batch interval of 10 seconds
        SparkConf conf = new SparkConf().setMaster("local").setAppName("Kafka Spark Integration");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

        // Define Kafka parameters
        Map<String, Object> kafkaParams = new HashMap<String, Object>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "0");
        // Automatically reset the offset to the earliest offset
        kafkaParams.put("auto.offset.reset", "earliest");
        kafkaParams.put("enable.auto.commit", false);

        // Define the list of Kafka topics to subscribe to
        Collection<String> topics = Arrays.asList("hello-kafka");

        // Create an input DStream that consumes messages from the Kafka topics
        JavaInputDStream<ConsumerRecord<String, String>> stream;
        stream = KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.Subscribe(topics, kafkaParams));

        // Read value of each message from Kafka
        JavaDStream<String> lines = stream.map((Function<ConsumerRecord<String, String>, String>) kafkaRecord -> kafkaRecord.value());

        // Split message into words
        JavaDStream<String> words = lines.flatMap((FlatMapFunction<String, String>) line -> Arrays.asList(line.split(" ")).iterator());

        // Map each word to a tuple of (word, 1)
        JavaPairDStream<String,Integer> wordMap = words.mapToPair((PairFunction<String, String, Integer>) word -> new Tuple2<>(word,1));

        // Count the occurrences of each word
        JavaPairDStream<String,Integer> wordCount = wordMap.reduceByKey((Function2<Integer, Integer, Integer>) (first, second) -> first+second);

        // Print the word counts
        wordCount.print();

        // Start the computation
        jssc.start();
        jssc.awaitTermination();
    }
} 

1 answer

阎知
2023-03-14

A mismatch between the Spark and Scala versions is what causes this. The problem should be resolved if you use the following consistent set of dependencies.

One observation of mine (which may not be 100% accurate) is that whenever we have spark-core_2.11 (or any spark-xxxx_2.11 artifact) together with a 2.12.x scala-library, I always run into problems. The easy rule to remember: with spark-xxxx_2.11 artifacts, use scala-library 2.11.x, never 2.12.x. The reason in your case: artifacts built for Scala 2.11 reference trait helper classes such as scala.Product$class, and Scala 2.12 removed those helpers in favor of default methods, so the 2.12.12 scala-library in your pom cannot satisfy the reference. You can check which generation is actually on the classpath with the probe below.
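Here is a minimal diagnostic sketch (the class name ScalaAbiProbe is hypothetical, not part of the asker's project) that tells you which Scala binary generation the JVM actually sees:

    public class ScalaAbiProbe {
        public static void main(String[] args) {
            try {
                // scala.Product$class exists only in scala-library 2.11 and earlier;
                // Scala 2.12 replaced these Trait$class helpers with default methods.
                Class.forName("scala.Product$class");
                System.out.println("scala-library on classpath is 2.11.x or older");
            } catch (ClassNotFoundException e) {
                System.out.println("scala-library on classpath is 2.12 or newer");
            }
        }
    }

If this prints the 2.12 branch while any spark-xxxx_2.11 artifact is on the classpath, you will get exactly the NoClassDefFoundError shown above.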

Please also change the scala-reflect and scala-compiler versions to 2.11.x.

    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.4.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.4.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql-kafka-0-10_2.11</artifactId>
        <version>2.4.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>2.4.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-mllib_2.11</artifactId>
        <version>2.4.2</version>
    </dependency>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>2.11.8</version>
    </dependency>
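
After aligning the versions, a quick sanity check is to print the Scala version the JVM actually loads. This is a sketch under the assumption that scala-library is on the classpath; versionNumberString() lives on the scala.util.Properties object and is callable from Java through the static forwarder Scala generates:

    public class ScalaVersionCheck {
        public static void main(String[] args) {
            // Should print e.g. "2.11.8" -- the major.minor part must match
            // the _2.11 suffix of every Spark artifactId in the pom.
            System.out.println("scala-library on classpath: "
                    + scala.util.Properties.versionNumberString());
        }
    }

Running mvn dependency:tree is also worth doing, to make sure no transitive dependency is still pulling a 2.12.x scala-library onto the classpath.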