I want to set up a single (standalone) cluster in Apache Spark. I have Java and Scala installed. I downloaded the Spark build prebuilt for Apache Hadoop 2.6 and unpacked it. When I try to start spark-shell it throws an error, and I have no access to sc in the shell. I also compiled from source, and the same error occurs. What am I doing wrong?
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.3.1
      /_/
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
java.net.BindException: Failed to bind to: ADMINISTRATOR.home/192.168.1.5:0: Service 'sparkDriver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
java.lang.NullPointerException
at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:145)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1027)
at $iwC$$iwC.<init>(<console>:9)
at $iwC.<init>(<console>:18)
at <init>(<console>:20)
at .<init>(<console>:24)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:130)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:973)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:990)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
<console>:10: error: not found: value sqlContext
import sqlContext.implicits._
^
<console>:10: error: not found: value sqlContext
import sqlContext.sql
^
scala>
I have just started learning Spark and wanted to run it in local mode. I ran into the same problem as you:

java.net.BindException: Failed to bind to: /124.232.132.94:0: Service 'sparkDriver' failed after 16 retries!

Spark resolves the machine's hostname to choose the driver's bind address, so if that name resolves to an address it cannot bind to, the driver gives up after 16 retries. Because I only wanted to run Spark in local mode, pinning everything to the loopback address fixed it for me. Solution: edit the file spark-env.sh (you can find it in $SPARK_HOME/conf/) and add these lines to it:

export SPARK_MASTER_IP=127.0.0.1
export SPARK_LOCAL_IP=127.0.0.1

After that, my Spark works fine in local mode. Hope this helps! :)
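For completeness, here is a minimal sketch of the whole change as shell commands. It assumes a standard binary distribution where $SPARK_HOME points at the unpacked Spark directory; the conf/spark-env.sh.template file ships with the distribution, but check the exact name in your version:

# Create spark-env.sh from the shipped template if it does not exist yet
cd $SPARK_HOME/conf
cp spark-env.sh.template spark-env.sh

# Pin the master and driver bind addresses to loopback for local mode
echo 'export SPARK_MASTER_IP=127.0.0.1' >> spark-env.sh
echo 'export SPARK_LOCAL_IP=127.0.0.1' >> spark-env.sh

# Restart the shell; sc and sqlContext should now be created without the BindException
$SPARK_HOME/bin/spark-shell

Equivalently, you can set the variable just for a single launch without editing any files: SPARK_LOCAL_IP=127.0.0.1 $SPARK_HOME/bin/spark-shell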