Question:

Resolving Spark dependencies with sbt

楚俊杰
2023-03-14

I am trying to build a very basic Scala script with a Spark dependency, but I cannot build a jar from it.

build.sbt:

    import Dependencies._

    lazy val root = (project in file(".")).
     settings(
               inThisBuild(List(
                                 organization := "com.example",
                                 scalaVersion := "2.12.1",
                                 version      := "0.1.0-SNAPSHOT"
                              )),
               name := "Hello",
               libraryDependencies +=  "org.apache.spark" %% "spark-core" % "1.6.0-SNAPSHOT",
               resolvers += Resolver.mavenLocal
                )
hello.scala:

package example
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object Hello  {
     def main(args: Array[String]) {
           val logFile = "/Users/dhruvsha/Applications/spark/README.md"                 
           val conf = new SparkConf().setAppName("Simple Application")
           val sc = new SparkContext(conf)
           val logData = sc.textFile(logFile, 2).cache()
           val numAs = logData.filter(line => line.contains("a")).count()
           val numBs = logData.filter(line => line.contains("b")).count()
           println(s"Lines with a: $numAs, Lines with b: $numBs")
           sc.stop()
         }
}

My Scala source code is at:

/exampleapp/main/scala/example/hello.scala

The project is named exampleapp.
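For reference, sbt's default layout expects sources under src/main/scala rather than directly under main/scala, and the jar is normally produced with sbt package. A sketch of that conventional layout for this project, assuming the default sbt settings (the src/ directory and the package step are assumptions, not taken from the question):

    exampleapp/
        build.sbt
        src/
            main/
                scala/
                    example/
                        hello.scala

    sbt package    # produces the jar under target/scala-<binary version>/

With the source in the conventional location, sbt package compiles the classes and writes the jar under target/.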

1 Answer

拓拔野
2023-03-14

The libraryDependencies line in your build.sbt looks wrong.

It should be:

libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.6.0"
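For context: with %%, sbt appends the Scala binary version to the artifact name, so scalaVersion and the Spark artifact must agree. Spark 1.6.0 was published for Scala 2.10 and 2.11 but not 2.12, and 1.6.0-SNAPSHOT is not available on Maven Central, which is why the original line fails to resolve. A minimal build.sbt sketch, assuming Scala 2.10 to match the spark-core_2.10 artifact suggested above:

    // build.sbt -- a minimal sketch; the Scala version is aligned with the
    // suffix of the Spark artifact (here 2.10, matching spark-core_2.10)
    lazy val root = (project in file(".")).
      settings(
        inThisBuild(List(
          organization := "com.example",
          scalaVersion := "2.10.6",      // must match the Spark artifact's Scala suffix
          version      := "0.1.0-SNAPSHOT"
        )),
        name := "Hello",
        // %% appends the Scala binary version, so this resolves to spark-core_2.10
        libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.0"
      )

With the Scala version and the artifact suffix aligned, sbt package should resolve spark-core and produce the jar.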