I have started learning Apache Spark's MLlib with Java. I am following the official Spark 2.1.1 documentation, and I have installed spark-2.1.1-bin-hadoop2.7 on my Ubuntu 14.04 LTS machine. I am trying to run this code:
import org.apache.spark.ml.classification.LogisticRegression;
import org.apache.spark.ml.classification.LogisticRegressionModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class JavaLogisticRegressionWithElasticNetExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("JavaLogisticRegressionWithElasticNetExample")
        .master("local[*]")
        .getOrCreate();

    // $example on$
    // Load training data
    Dataset<Row> training = spark.read().format("libsvm")
        .load("data/mllib/sample_libsvm_data.txt");

    LogisticRegression lr = new LogisticRegression()
        .setMaxIter(10)
        .setRegParam(0.3)
        .setElasticNetParam(0.8);

    // Fit the model
    LogisticRegressionModel lrModel = lr.fit(training);

    // Print the coefficients and intercept for logistic regression
    System.out.println("Coefficients: "
        + lrModel.coefficients() + " Intercept: " + lrModel.intercept());

    // We can also use the multinomial family for binary classification
    LogisticRegression mlr = new LogisticRegression()
        .setMaxIter(10)
        .setRegParam(0.3)
        .setElasticNetParam(0.8)
        .setFamily("multinomial");

    // Fit the model
    LogisticRegressionModel mlrModel = mlr.fit(training);

    // Print the coefficients and intercepts for logistic regression with multinomial family
    System.out.println("Multinomial coefficients: " + mlrModel.coefficientMatrix()
        + "\nMultinomial intercepts: " + mlrModel.interceptVector());
    // $example off$

    spark.stop();
  }
}
My pom.xml file contains these dependencies:
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.11</artifactId>
  <version>2.1.1</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-mllib_2.10</artifactId>
  <version>2.1.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-mllib-local_2.10 -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-mllib-local_2.10</artifactId>
  <version>2.1.1</version>
</dependency>
But I am getting this exception:
17/09/08 16:42:19 INFO SparkEnv: Registering OutputCommitCoordinator
Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.$scope()Lscala/xml/TopScope$;
    at org.apache.spark.ui.jobs.AllJobsPage.<init>(AllJobsPage.scala:39)
    at org.apache.spark.ui.jobs.JobsTab.<init>(JobsTab.scala:38)
    at org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:65)
    at org.apache.spark.ui.SparkUI.<init>(SparkUI.scala)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:452)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2320)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
    at scala.Option.getOrElse(Option.scala:121)
    at JavaLogisticRegressionWithElasticNetExample.main(JavaLogisticRegressionWithElasticNetExample.java:12)
17/09/08 16:42:19 INFO DiskBlockManager: Shutdown hook called
17/09/08 16:42:19 INFO ShutdownHookManager: Deleting directory /tmp/spark-8460a189-3039-47ec-8d75-9e0ca8b4ee5d
17/09/08 16:42:19 INFO ShutdownHookManager: Deleting directory /tmp/spark-8460a189-3039-47ec-8d75-9e0ca8b4ee5d/userFiles-9b6994eb-1376-47a3-929e-e415e1fdb0c0
You get this kind of error when you mix different Scala versions in the same program, and that is exactly what happens here: among the dependencies in your pom.xml, some libraries are built against Scala 2.10 while others are built against Scala 2.11.
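To confirm the mismatch, you can print Maven's resolved dependencies with mvn dependency:tree (a standard maven-dependency-plugin goal) and look for mixed _2.10 / _2.11 suffixes among the org.apache.spark artifacts; every Spark module's artifactId ends with the Scala version it was built against.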
Use spark-sql_2.10 instead of spark-sql_2.11 and you will be fine (or change the MLlib artifacts to their _2.11 versions).
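Since the prebuilt spark-2.1.1-bin-hadoop2.7 distribution is compiled against Scala 2.11, aligning everything on the _2.11 artifacts is the safer of the two options when you run against that installation. A minimal sketch of the corrected dependency block, keeping your artifacts and versions and changing only the Scala suffixes:

<!-- All Spark modules share one Scala suffix (_2.11), matching the
     Scala version of the prebuilt spark-2.1.1-bin-hadoop2.7 binaries -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.11</artifactId>
  <version>2.1.1</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-mllib_2.11</artifactId>
  <version>2.1.1</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-mllib-local_2.11</artifactId>
  <version>2.1.1</version>
</dependency>

(If I remember correctly, spark-mllib_2.11 already pulls in spark-sql_2.11 and spark-mllib-local_2.11 transitively, so strictly speaking only the mllib entry is needed.) You can also verify which Scala version your installation was built with by running spark-submit --version, which prints a line like "Using Scala version 2.11.8".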