Question:

Spark-Cassandra Maven project calling scala-lib from Java source code

酆景辉
2023-03-14

4) In short: [Oracle DB]<---[Spark]----[spark-cassandra-connector]-->[Cassandra]

The problem I am running into is in calling scala-lib from my Java code (step 1 above); more specifically, during the load function call: DataFrame jdbcDF = sqlContext.load("jdbc", options);

Runtime error: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class

My pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.dev</groupId>
    <artifactId>spark-cassandra</artifactId>
    <version>0.0.1-SPARK-CASSANDRA</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>1.3.1</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>1.3.1</version>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.35</version>
        </dependency>
        <dependency>
            <groupId>com.oracle</groupId>
            <artifactId>ojdbc6</artifactId>
            <version>11.2.0</version>
        </dependency>

        <dependency>
            <groupId>com.datastax.spark</groupId>
            <artifactId>spark-cassandra-connector_2.10</artifactId>
            <version>1.0.0-rc4</version>
        </dependency>

        <dependency>
            <groupId>com.datastax.spark</groupId>
            <artifactId>spark-cassandra-connector-java_2.10</artifactId>
            <version>1.0.0-rc4</version>
        </dependency>

        <dependency>
            <groupId>com.datastax.cassandra</groupId>
            <artifactId>cassandra-driver-core</artifactId>
            <version>2.1.5</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.10</artifactId>
            <version>1.3.1</version>
        </dependency>

        <dependency>
            <groupId>com.dev.cassandra</groupId>
            <artifactId>spark-cassandra</artifactId>
            <version>1.0</version>
        </dependency>

        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.10.3</version>
        </dependency>

        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-compiler</artifactId>
            <version>2.10.3</version>
        </dependency>

        <!--
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-reflect</artifactId>
            <version>2.10.0-M1</version>
        </dependency>
        -->

        <!--
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-swing</artifactId>
            <version>2.10.0-M1</version>
        </dependency>
        -->
    </dependencies>
   

    <build>
        <pluginManagement>
	  
            <plugins>
                <plugin>
                    <groupId>net.alchim31.maven</groupId>
                    <artifactId>scala-maven-plugin</artifactId>
                    <version>3.1.5</version>
                </plugin>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>3.3</version>
                    <configuration>
                        <source>1.7</source>
                        <target>1.7</target>
                        <mainClass>com.dev.cassandra.Main</mainClass>
                        <cleanupDaemonThreads>false</cleanupDaemonThreads>
                        <compilerArgument>-Xlint:all</compilerArgument>
                        <showWarnings>true</showWarnings>
                        <showDeprecation>true</showDeprecation>
                    </configuration>
                </plugin>
            </plugins>
        </pluginManagement>

        <plugins>

            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>add-source</goal>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>scala-test-compile</id>
                        <phase>process-test-resources</phase>
                        <goals>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <!-- Plugin to create a single jar that includes all dependencies 
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>2.4</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                    <archive>
                        <manifest>
                            <mainClass>com.dev.cassandra.Main</mainClass>
                        </manifest>
                    </archive>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
		-->
        </plugins>
    </build>				

</project>
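A quick way to confirm which Scala builds Maven actually resolves (a diagnostic sketch, not from the original post, using the standard dependency:tree goal; the include patterns are illustrative):

mvn dependency:tree -Dincludes=org.scala-lang,:*_2.10,:*_2.11

If artifact IDs ending in both _2.10 and _2.11 appear, two binary-incompatible Scala builds are being mixed on the classpath.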

Java code:

package com.dev.cassandra;

import java.io.Serializable;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import java.sql.*;

import org.apache.spark.*;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.*;	
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

import oracle.jdbc.*;

import com.datastax.spark.connector.cql.CassandraConnector;
import static com.datastax.spark.connector.CassandraJavaUtil.*;

public class Main implements Serializable {

    private static final org.apache.log4j.Logger LOGGER = org.apache.log4j.Logger.getLogger(Main.class);

    private static final String JDBC_DRIVER = "oracle.jdbc.driver.OracleDriver";
    private static final String JDBC_USERNAME = "XXXXXO01";
    private static final String JDBC_PWD = "XXXXXO01";
    private static final String JDBC_CONNECTION_URL =
            "jdbc:oracle:thin:" + JDBC_USERNAME + "/" + JDBC_PWD + "@CONNECTION VALUES";

    private transient SparkConf conf;
  
    private Main(SparkConf conf) {
        this.conf = conf;
    }
  
    private void run() {
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);
        generateData(sc);
        compute(sc);
        showResults(sc);
        sc.stop();
    }
  
    private void generateData(JavaSparkContext sc) {
    
      SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);
      System.out.println("AFTER SQL CONTEXT");
      
      //Data source options
      Map<String, String> options = new HashMap<>();
      options.put("driver", JDBC_DRIVER);
      options.put("url", JDBC_CONNECTION_URL);
      options.put("dbtable","(SELECT * FROM XXX_SAMPLE_TABLE WHERE ROWNUM <=5)");
      
      CassandraConnector connector = CassandraConnector.apply(sc.getConf());
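      // (As posted, this connector instance is not used later in this method.)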
      
      try {
          // Register the Oracle JDBC driver
          Class.forName(JDBC_DRIVER);

          System.out.println("BEFORE jdbcDF");

          // Load the JDBC query result as a DataFrame (this is where the ClassNotFoundException is thrown)
          DataFrame jdbcDF = sqlContext.load("jdbc", options);
          System.out.println("AFTER jdbcDF");

          List<Row> tableRows = jdbcDF.collectAsList();

          System.out.println("AFTER tableRows");

          for (Row tableRow : tableRows) {
              System.out.println();
              LOGGER.info(tableRow);
              System.out.println();
          }

      } catch (Exception e) {
          // Handle errors from Class.forName and the JDBC load
          e.printStackTrace();
      }
    }
  
    private void compute(JavaSparkContext sc) {
    }
  
    private void showResults(JavaSparkContext sc) {
    }
    
    
    public static void main(String[] args) throws InterruptedException {

        if (args.length != 2) {
            System.err.println("Syntax: com.datastax.spark.dev.cassandra <Spark Master URL> <Cassandra contact point>");
            System.exit(1);
        }

        //JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("SparkJdbcDs").setMaster("local[*]"));
        SparkConf conf = new SparkConf().setAppName("SparkJdbcDs").setMaster("local[*]");

        //SparkConf conf = new SparkConf();
        //conf.setAppName("SparkJdbcDs");
        //conf.setMaster(args[0]);
        //conf.set("spark.cassandra.connection.host", args[1]);

        Main app = new Main(conf);
        app.run();
    }
}
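For reference, a launch command along these lines would satisfy the two-argument check in main above, assuming the commented-out assembly plugin is re-enabled so that dependencies are bundled (hosts and paths are placeholders):

spark-submit --class com.dev.cassandra.Main \
    --master spark://spark-host:7077 \
    target/spark-cassandra-0.0.1-SPARK-CASSANDRA-jar-with-dependencies.jar \
    spark://spark-host:7077 cassandra-host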

Thanks in advance!

1 answer

易镜
2023-03-14

The pom.xml is requesting the Scala 2.11 builds of some of the Spark JARs:

<artifactId>spark-core_2.11</artifactId>

<artifactId>spark-sql_2.11</artifactId>

as well as the Scala 2.10 builds of another Spark JAR and of the Cassandra connector JARs:

<artifactId>spark-streaming_2.10</artifactId>

<artifactId>spark-cassandra-connector_2.10</artifactId>

<artifactId>spark-cassandra-connector-java_2.10</artifactId>

(By the Scala naming convention, an artifact ID ends with the Scala binary version it was built for.) Scala 2.10 and 2.11 are not binary compatible, so pick one version and use it consistently across every artifact, including scala-library.
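A minimal sketch of a consistent dependency set, assuming you standardize on Scala 2.10 (which matches the scala-library 2.10.3 and the connector artifacts already in the pom; versions are kept from the original, only the Scala suffixes change):

<!-- All Spark modules on the same Scala binary version: _2.10 -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.3.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.3.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.10</artifactId>
    <version>1.3.1</version>
</dependency>
<!-- Connector and scala-library stay on 2.10, as in the original pom -->
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.10</artifactId>
    <version>1.0.0-rc4</version>
</dependency>
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.0.0-rc4</version>
</dependency>
<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.10.3</version>
</dependency>

Separately, the 1.0.x connector line was built against Spark 1.0, so with Spark 1.3.1 a matching 1.3.x connector release is likely a better fit; that is an upgrade to verify independently of the Scala-version fix.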
