I get this error every time I run a Scala program in Spark with the Cassandra connector:
Exception during preparation of SELECT count(*) FROM "eventtest"."simpletbl" WHERE token("a") > ? AND token("a") <= ? ALLOW FILTERING:
class org.joda.time.DateTime in JavaMirror with org.apache.spark.util.MutableURLClassLoader@23041911 of type class org.apache.spark.util.MutableURLClassLoader with classpath
[file:/home/sysadmin/ApacheSpark/spark-1.4.0-bin-hadoop2.4/work/app-20150711142923-0023/0/./spark-cassandra-connector_2.10-1.4.0-M1.jar,
 file:/home/sysadmin/ApacheSpark/spark-1.4.0-bin-hadoop2.4/work/app-20150711142923-0023/0/./cassandra-driver-core-2.1.5.jar,
 file:/home/sysadmin/ApacheSpark/spark-1.4.0-bin-hadoop2.4/work/app-20150711142923-0023/0/./cassandra-spark-job_2.10-1.0.jar,
 file:/home/sysadmin/ApacheSpark/spark-1.4.0-bin-hadoop2.4/work/app-20150711142923-0023/0/./guava-18.0.jar,
 file:/home/sysadmin/ApacheSpark/spark-1.4.0-bin-hadoop2.4/work/app-20150711142923-0023/0/./joda-convert-1.2.jar,
 file:/home/sysadmin/ApacheSpark/spark-1.4.0-bin-hadoop2.4/work/app-20150711142923-0023/0/./cassandra-clientutil-2.1.5.jar,
 file:/home/sysadmin/ApacheSpark/spark-1.4.0-bin-hadoop2.4/work/app-20150711142923-0023/0/./google-collections-1.0.jar]
and parent being sun.misc.Launcher$AppClassLoader@6132b73b of type class sun.misc.Launcher$AppClassLoader with classpath
[file:/home/sysadmin/ApacheSpark/spark-1.4.0-bin-hadoop2.4/conf/,
 file:/home/sysadmin/ApacheSpark/spark-1.4.0-bin-hadoop2.4/lib/spark-assembly-1.4.0-hadoop2.4.0.jar,
 file:/home/sysadmin/ApacheSpark/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-api-jdo-3.2.6.jar,
 file:/home/sysadmin/ApacheSpark/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-core-3.2.10.jar,
 file:/home/sysadmin/ApacheSpark/spark-1.4.0-bin-hadoop2.4/lib/datanucleus-rdbms-3.2.9.jar]
and parent being sun.misc.Launcher$ExtClassLoader@489bb457 of type class sun.misc.Launcher$ExtClassLoader with classpath
[file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/dnsns.jar,
 file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/sunpkcs11.jar,
 file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/sunjce_provider.jar,
 file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/zipfs.jar,
 file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/libatk-wrapper.so,
 file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/java-atk-wrapper.jar,
 file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/localedata.jar,
 file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/icedtea-sound.jar]
and parent being primordial classloader with boot classpath
[/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/resources.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/rt.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/sunrsasign.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/jsse.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/jce.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/charsets.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/rhino.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/jfr.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/classes]
not found.
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.createStatement(CassandraTableScanRDD.scala:163)
Here is my program:
/** CassandraJob.scala **/
import com.datastax.spark.connector._
import org.apache.spark._

object CassandraJob {
  def main(args: Array[String]) {
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "172.28.0.164")
      .set("spark.cassandra.connection.rpc.port", "9160")
    val sc = new SparkContext(conf)
    // Full scan of the Cassandra table as an RDD
    val rdd = sc.cassandraTable("eventtest", "simpletbl")
    println("cassandra row count : " + rdd.count + ", cassandra row : " + rdd.first)
  }
}
./bin/spark-submit --jars $(echo /home/sysadmin/ApacheSpark/jar/*.jar | tr ' ' ',') --class "CassandraJob" --master spark://noi-cs-01:7077 /home/sysadmin/ApacheSparkProj/CassandraJob/target/scala-2.10/cassandra-spark-job_2.10-1.0.jar
I am guessing you use org.joda.time.DateTime, and it is missing from the JARs you submit. Just add that jar to your dependencies:

... --jars $(echo /home/sysadmin/ApacheSpark/jar/*.jar | tr ' ' ','),/path/to/downloaded/jodatime/jar --class "CassandraJob" ...
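Spelled out against the full submit command from the question, that would look something like this (the joda-time jar path is a placeholder for wherever you actually downloaded it):

./bin/spark-submit \
  --jars $(echo /home/sysadmin/ApacheSpark/jar/*.jar | tr ' ' ','),/path/to/downloaded/jodatime/jar \
  --class "CassandraJob" \
  --master spark://noi-cs-01:7077 \
  /home/sysadmin/ApacheSparkProj/CassandraJob/target/scala-2.10/cassandra-spark-job_2.10-1.0.jar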
An alternative is to use the sbt assembly plugin instead of sbt package: declare joda-time (the library that provides org.joda.time.DateTime) among your sbt library dependencies and submit the resulting fat jar, which bundles the library with your program. A sketch of the sbt setup follows below.
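As a rough sketch, the sbt side of that approach could look like the following; the plugin and library versions are assumptions chosen to match the Spark 1.4.0 / Scala 2.10 / connector 1.4.0-M1 setup above, so adjust them to your environment:

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.13.0")

// build.sbt
name := "cassandra-spark-job"

version := "1.0"

scalaVersion := "2.10.5"

libraryDependencies ++= Seq(
  // Spark itself is provided by the cluster, so keep it out of the fat jar.
  "org.apache.spark" %% "spark-core" % "1.4.0" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.4.0-M1",
  // joda-time supplies org.joda.time.DateTime, the class the executor could not load.
  "joda-time" % "joda-time" % "2.8.1"
)

Running sbt assembly then produces a single fat jar under target/scala-2.10/ that can be handed to spark-submit without the long --jars list.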