Question:

Spark Cassandra integration error with spark-cassandra-connector

翁鸿远
2023-03-14
bin/spark-submit --packages datastax:spark-cassandra-connector:1.6.0-s_2.10 \
  --class "pl.japila.spark.SparkMeApp" --master local \
  /home/hduser2/code14/target/scala-2.10/simple-project_2.10-1.0.jar
name := "Simple Project"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.6.0"

resolvers += "Spark Packages Repo" at "https://dl.bintray.com/spark-packages/maven"

libraryDependencies += "datastax" % "spark-cassandra-connector" % "1.6.0-s_2.10"

libraryDependencies ++= Seq(
  "org.apache.cassandra" % "cassandra-thrift" % "3.5",
  "org.apache.cassandra" % "cassandra-clientutil" % "3.5",
  "com.datastax.cassandra" % "cassandra-driver-core" % "3.0.0"
)
package pl.japila.spark

import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.sql._
import com.datastax.spark.connector._
import com.datastax.spark.connector.cql._
import com.datastax.spark.connector.rdd._
import com.datastax.driver.core._
import com.datastax.driver.core.QueryOptions._

object SparkMeApp {
  def main(args: Array[String]) {
    val conf = new SparkConf(true).set("spark.cassandra.connection.host", "127.0.0.1")
    val sc = new SparkContext("local", "test", conf)
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)

    val rdd = sc.cassandraTable("test", "kv")
    val collection = sc.parallelize(Seq(("cat", 30), ("fox", 40)))
    collection.saveToCassandra("test", "kv", SomeColumns("key", "value"))
  }
}

I get the following error:

Exception in thread "main" java.lang.NoSuchMethodError: com.datastax.driver.core.QueryOptions.setRefreshNodeIntervalMillis(I)Lcom/datastax/driver/core/QueryOptions;
        at com.datastax.spark.connector.cql.DefaultConnectionFactory$.clusterBuilder(CassandraConnectionFactory.scala:49)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$3.apply(CassandraConnector.scala:148)
        at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$3.apply(CassandraConnector.scala:148)
        at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:56)
        at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:81)
        at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)

The versions used are:

Spark 1.6.0
Scala 2.10.4
cassandra-driver-core JAR 3.0.0
Cassandra 2.2.7
spark-cassandra-connector 1.6.0-s_2.10

Can someone please help me?!

1 answer in total

申屠乐池
2023-03-14

I would start by removing

libraryDependencies ++= Seq(
  "org.apache.cassandra" % "cassandra-thrift" % "3.5",
  "org.apache.cassandra" % "cassandra-clientutil" % "3.5",
  "com.datastax.cassandra" % "cassandra-driver-core" % "3.0.0"
)

because the libraries the connector depends on are included automatically as transitive dependencies of the package. Pinning a second copy of cassandra-driver-core alongside the one the connector expects is a classic cause of a runtime NoSuchMethodError like the one above.
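For reference, a minimal build.sbt after that removal might look like the sketch below. It keeps the coordinates from the question as-is and simply lets the connector pull in a compatible cassandra-driver-core on its own:

name := "Simple Project"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.6.0"

// Resolver for artifacts published to spark-packages.org.
resolvers += "Spark Packages Repo" at "https://dl.bintray.com/spark-packages/maven"

// The connector's transitive dependencies (cassandra-driver-core, guava,
// jsr166e, ...) come along automatically; no explicit driver pin is needed.
libraryDependencies += "datastax" % "spark-cassandra-connector" % "1.6.0-s_2.10"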

Then I would test that the package resolves correctly by launching spark-shell:

./bin/spark-shell --packages datastax:spark-cassandra-connector:1.6.0-s_2.10
datastax#spark-cassandra-connector added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
        confs: [default]
        found datastax#spark-cassandra-connector;1.6.0-s_2.10 in spark-packages
        found org.apache.cassandra#cassandra-clientutil;3.0.2 in list
        found com.datastax.cassandra#cassandra-driver-core;3.0.0 in list
        ...
        [2.10.5] org.scala-lang#scala-reflect;2.10.5
:: resolution report :: resolve 627ms :: artifacts dl 10ms
        :: modules in use:
        com.datastax.cassandra#cassandra-driver-core;3.0.0 from list in [default]
        com.google.guava#guava;16.0.1 from list in [default]
        com.twitter#jsr166e;1.1.0 from list in [default]
        datastax#spark-cassandra-connector;1.6.0-s_2.10 from spark-packages in [default]
        ...
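If resolution looks like the above, a quick smoke test from that same shell session can confirm the connector actually talks to Cassandra. This is a sketch under the question's setup (a local node at 127.0.0.1 with keyspace test and table kv); in the 1.6-era connector, spark.cassandra.connection.host defaults to localhost, or it can be set explicitly with --conf spark.cassandra.connection.host=127.0.0.1 when launching the shell:

import com.datastax.spark.connector._

// Read the table back through the connector; sc is the SparkContext
// that spark-shell creates for you.
val rdd = sc.cassandraTable("test", "kv")
println(rdd.count())         // row count of test.kv
rdd.take(3).foreach(println) // prints a few CassandraRow objects

// Round-trip: write two rows and they should appear in the next read.
val rows = sc.parallelize(Seq(("cat", 30), ("fox", 40)))
rows.saveToCassandra("test", "kv", SomeColumns("key", "value"))

If this works in the shell but the packaged application still fails, the conflict is almost certainly coming from the jars bundled into (or alongside) the application itself.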