Question:

sbt assembly fails with error "object spark is not a member of package org.apache" despite including the spark-core and spark-sql libraries

濮阳弘扬
2023-03-14

I'm trying to run sbt assembly on a Spark project. sbt compile and sbt package both work, but when I run sbt assembly I get the errors shown below. My build.sbt:

name := "redis-record-loader"

scalaVersion := "2.11.8"

val sparkVersion = "2.3.1"
val scalatestVersion = "3.0.3"
val scalatest = "org.scalatest" %% "scalatest" % scalatestVersion

libraryDependencies ++=
  Seq(
    "com.amazonaws" % "aws-java-sdk-s3" % "1.11.347",
    "com.typesafe" % "config" % "1.3.1",
    "net.debasishg" %% "redisclient" % "3.0",
    "org.slf4j" % "slf4j-log4j12" % "1.7.12",
    "org.apache.commons" % "commons-lang3" % "3.0" % "test,it",
    "org.apache.hadoop" % "hadoop-aws" % "2.8.1" % Provided,
    "org.apache.spark" %% "spark-core" % sparkVersion % Provided,
    "org.apache.spark" %% "spark-sql" % sparkVersion % Provided,
    "org.mockito" % "mockito-core" % "2.21.0" % Test,
    scalatest
)

val integrationTestsKey = "it"
val integrationTestLibs = scalatest % integrationTestsKey

lazy val IntegrationTestConfig = config(integrationTestsKey) extend Test

lazy val root = project.in(file("."))
  .configs(IntegrationTestConfig)
  .settings(inConfig(IntegrationTestConfig)(Defaults.testSettings): _*)
  .settings(libraryDependencies ++= Seq(integrationTestLibs))

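// run both the unit tests and the integration tests before building the assembly jar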
test in assembly := Seq(
  (test in Test).value,
  (test in IntegrationTestConfig).value
)

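// merge strategy for duplicate entries in the fat jar: drop META-INF, otherwise keep the first copy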
assemblyMergeStrategy in assembly := {
    case PathList("META-INF", xs @ _*) => MergeStrategy.discard
    case x => MergeStrategy.first
}

plugins.sbt:

logLevel := Level.Warn

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")

Full error message:

[error] /com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:11: object spark is not a member of package org.apache
[error] import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}
[error]                   ^
[error] /Users/jones8/Work/redis-record-loader/src/it/scala/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:26: not found: type SparkSession
[error]   implicit val spark: SparkSession = SparkSession.builder
[error]                       ^
[error] /Users/jones8/Work/redis-record-loader/src/it/scala/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:26: not found: value SparkSession
[error]   implicit val spark: SparkSession = SparkSession.builder
[error]                                      ^
[error] /Users/jones8/Work/redis-record-loader/src/it/scala/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:51: not found: type DataFrame
[error]   val testDataframe0: DataFrame = testData0.toDF()
[error]                       ^
[error] /Users/jones8/Work/redis-record-loader/src/it/scala/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:51: value toDF is not a member of Seq[(String, String)]
[error]   val testDataframe0: DataFrame = testData0.toDF()
[error]                                             ^
[error] /Users/jones8/Work/redis-record-loader/src/it/scala/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:52: not found: type DataFrame
[error]   val testDataframe1: DataFrame = testData1.toDF()
[error]                       ^
[error] /Users/jones8/Work/redis-record-loader/src/it/scala/com/elsevier/bos/RedisRecordLoaderIntegrationSpec.scala:52: value toDF is not a member of Seq[(String, String)]
[error]   val testDataframe1: DataFrame = testData1.toDF()
[error]                                             ^
[error] missing or invalid dependency detected while loading class file 'RedisRecordLoader.class'.
[error] Could not access term spark in package org.apache,
[error] because it (or its dependencies) are missing. Check your build definition for
[error] missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see the problematic classpath.)
[error] A full rebuild may help if 'RedisRecordLoader.class' was compiled against an incompatible version of org.apache.
[error] missing or invalid dependency detected while loading class file 'RedisRecordLoader.class'.
[error] Could not access type SparkSession in value org.apache.sql,
[error] because it (or its dependencies) are missing. Check your build definition for
[error] missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see the problematic classpath.)
[error] A full rebuild may help if 'RedisRecordLoader.class' was compiled against an incompatible version of org.apache.sql.
[error] 9 errors found

1 Answer

笪成周
2023-03-14

I can't comment on the error itself, but I can say: I doubt those AWS SDK and hadoop-aws versions will work together. You need the hadoop-aws version to exactly match the hadoop-common JAR on the classpath (after all, they are part of one project released in sync), and the AWS SDK version it was built against is 1.10. The AWS SDK has a habit of (a) breaking its API on every point release, (b) aggressively pushing new versions of Jackson even when they're incompatible, and (c) causing regressions in the hadoop-aws code.
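To make the version-matching point concrete, here is a minimal build.sbt sketch. It assumes hadoop-aws 2.8.1 was built against aws-java-sdk-s3 1.10.6; that pairing is an assumption, so verify it in the hadoop-aws 2.8.1 POM before relying on it:

// Sketch only: pin the AWS SDK to the version hadoop-aws was built against,
// not to the newest SDK release.
val hadoopVersion = "2.8.1"
val awsSdkVersion = "1.10.6" // assumed match for hadoop-aws 2.8.x; check its POM

libraryDependencies ++= Seq(
  "com.amazonaws" % "aws-java-sdk-s3" % awsSdkVersion,
  "org.apache.hadoop" % "hadoop-aws" % hadoopVersion % Provided
)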

If you really want to use S3A, you're best off moving to Hadoop 2.9, which ships with a shaded 1.11.x version of the SDK.
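And a hypothetical sketch of that Hadoop 2.9 route. The exact version numbers below are illustrative assumptions; confirm the matching aws-java-sdk-bundle version in the hadoop-aws POM of whichever 2.9.x release you pick:

// Hadoop 2.9.x builds hadoop-aws against the shaded aws-java-sdk-bundle artifact,
// which keeps the SDK's Jackson and other transitive deps off your classpath.
libraryDependencies ++= Seq(
  "org.apache.hadoop" % "hadoop-aws" % "2.9.2" % Provided,        // illustrative 2.9.x release
  "com.amazonaws" % "aws-java-sdk-bundle" % "1.11.199" % Provided // assumed matching bundle
)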
