Question:

Spark standalone on Kubernetes: application finishes after consecutive master and driver failures

公羊凌
2023-03-14

I am trying to achieve high availability for the Spark master using ZooKeeper, and driver resiliency using metadata checkpointing to GlusterFS.
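
For context, the job rebuilds its streaming context from the checkpoint directory on startup; the stack trace further down goes through JavaStreamingContext.getOrCreate. Below is a minimal sketch of that pattern. The class name, app name, checkpoint path, and master URL are taken from the logs and stack trace in this question; the 5-second batch interval is inferred from the checkpoint timestamps, and the Kafka wiring is elided:

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    public class SparkKafkaStreamingWithGluster {
        public static void main(String[] args) throws Exception {
            final String checkpointDir = "file:/ckp"; // GlusterFS mount, per the driver log

            // Rebuild the context from checkpoint metadata if present,
            // otherwise create a fresh one via the factory function.
            JavaStreamingContext jssc = JavaStreamingContext.getOrCreate(checkpointDir, () -> {
                SparkConf conf = new SparkConf()
                        .setAppName("SparkStreamingWithCheckPointAndZK")
                        .setMaster("spark://10.1.170.81:7077"); // single master URL, as in the driver log
                JavaStreamingContext ctx = new JavaStreamingContext(conf, Durations.seconds(5));
                ctx.checkpoint(checkpointDir);
                // ... Kafka DStream and output operations go here ...
                return ctx;
            });

            jssc.start();
            jssc.awaitTermination();
        }
    }

One detail worth noting: with ZooKeeper recovery enabled, a standalone master URL may list every master instance (spark://host1:port1,host2:port2), so a client can keep working with whichever instance is the current leader; the driver log below shows retries against a single address.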

Spark master log after the standby took over leadership:
18/06/11 10:23:16 INFO ZooKeeperLeaderElectionAgent: We have gained leadership
18/06/11 10:23:16 INFO Master: I have been elected leader! New state: RECOVERING
18/06/11 10:23:16 INFO Master: Trying to recover app: app-20180611102123-0001
18/06/11 10:23:16 INFO Master: Trying to recover worker: worker-20180611101834-10.1.53.142-36203
18/06/11 10:23:16 INFO Master: Trying to recover worker: worker-20180611102123-10.1.170.85-39447
18/06/11 10:23:16 INFO Master: Trying to recover worker: worker-20180611101834-10.1.185.87-38235
18/06/11 10:23:16 INFO TransportClientFactory: Successfully created connection to /10.1.53.142:36203 after 7 ms (0 ms spent in bootstraps)
18/06/11 10:23:16 INFO TransportClientFactory: Successfully created connection to /10.1.185.87:38235 after 3 ms (0 ms spent in bootstraps)
18/06/11 10:23:16 INFO TransportClientFactory: Successfully created connection to /10.1.53.142:38994 after 12 ms (0 ms spent in bootstraps)
18/06/11 10:23:16 INFO TransportClientFactory: Successfully created connection to /10.1.170.85:39447 after 7 ms (0 ms spent in bootstraps)
18/06/11 10:23:16 INFO Master: Application has been re-registered: app-20180611102123-0001
18/06/11 10:23:16 INFO Master: Worker has been re-registered: worker-20180611102123-10.1.170.85-39447
18/06/11 10:23:16 INFO Master: Worker has been re-registered: worker-20180611101834-10.1.53.142-36203
18/06/11 10:23:16 INFO Master: Worker has been re-registered: worker-20180611101834-10.1.185.87-38235
18/06/11 10:23:16 INFO Master: Recovery complete - resuming operations!
18/06/11 10:24:37 INFO Master: Received unregister request from application app-20180611102123-0001
18/06/11 10:24:37 INFO Master: Removing app app-20180611102123-0001
18/06/11 10:24:37 INFO Master: 10.1.53.142:38994 got disassociated, removing it.
18/06/11 10:24:37 INFO Master: 10.1.53.142:38994 got disassociated, removing it.
18/06/11 10:24:37 WARN Master: Got status update for unknown executor app-20180611102123-0001/0
18/06/11 10:24:37 WARN Master: Got status update for unknown executor app-20180611102123-0001/1
18/06/11 10:24:38 INFO Master: 10.1.53.142:36203 got disassociated, removing it.
18/06/11 10:24:38 INFO Master: Removing worker worker-20180611101834-10.1.53.142-36203 on 10.1.53.142:36203
18/06/11 10:24:38 INFO Master: Re-launching driver-20180611102017-0000
18/06/11 10:24:38 INFO Master: Launching driver driver-20180611102017-0000 on worker worker-20180611101834-10.1.185.87-38235
18/06/11 10:24:38 INFO Master: 10.1.53.142:59142 got disassociated, removing it.
18/06/11 10:24:38 INFO Master: 10.1.53.142:36203 got disassociated, removing it.
18/06/11 10:24:38 INFO Master: 10.1.53.142:36203 got disassociated, removing it.
18/06/11 10:24:43 INFO Master: Registering worker 10.1.53.143:35156 with 8 cores, 30.3 GB RAM

The driver remains in a halted state. Driver error log:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
18/06/11 19:32:14 INFO SecurityManager: Changing view acls to: root
18/06/11 19:32:14 INFO SecurityManager: Changing modify acls to: root
18/06/11 19:32:14 INFO SecurityManager: Changing view acls groups to: 
18/06/11 19:32:14 INFO SecurityManager: Changing modify acls groups to: 
18/06/11 19:32:14 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
18/06/11 19:32:15 INFO Utils: Successfully started service 'Driver' on port 40594.
18/06/11 19:32:15 INFO WorkerWatcher: Connecting to worker spark://Worker@10.1.185.87:38235
18/06/11 19:32:15 INFO TransportClientFactory: Successfully created connection to /10.1.185.87:38235 after 44 ms (0 ms spent in bootstraps)
18/06/11 19:32:15 INFO WorkerWatcher: Successfully connected to spark://Worker@10.1.185.87:38235
18/06/11 19:32:15 INFO CheckpointReader: Checkpoint files found: file:/ckp/checkpoint-1528712675000,file:/ckp/checkpoint-1528712675000.bk,file:/ckp/checkpoint-1528712670000,file:/ckp/checkpoint-1528712670000.bk,file:/ckp/checkpoint-1528712665000,file:/ckp/checkpoint-1528712665000.bk,file:/ckp/checkpoint-1528712660000,file:/ckp/checkpoint-1528712660000.bk,file:/ckp/checkpoint-1528712655000,file:/ckp/checkpoint-1528712655000.bk
18/06/11 19:32:15 INFO CheckpointReader: Attempting to load checkpoint from file file:/ckp/checkpoint-1528712675000
18/06/11 19:32:15 INFO Checkpoint: Checkpoint for time 1528712675000 ms validated
18/06/11 19:32:15 INFO CheckpointReader: Checkpoint successfully loaded from file file:/ckp/checkpoint-1528712675000
18/06/11 19:32:15 INFO CheckpointReader: Checkpoint was generated at time 1528712675000 ms
18/06/11 19:32:15 INFO SparkContext: Running Spark version 2.2.0
18/06/11 19:32:15 INFO SparkContext: Submitted application: SparkStreamingWithCheckPointAndZK
18/06/11 19:32:15 INFO SecurityManager: Changing view acls to: root
18/06/11 19:32:15 INFO SecurityManager: Changing modify acls to: root
18/06/11 19:32:15 INFO SecurityManager: Changing view acls groups to: 
18/06/11 19:32:15 INFO SecurityManager: Changing modify acls groups to: 
18/06/11 19:32:15 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
18/06/11 19:32:15 INFO Utils: Successfully started service 'sparkDriver' on port 46544.
18/06/11 19:32:15 INFO SparkEnv: Registering MapOutputTracker
18/06/11 19:32:15 INFO SparkEnv: Registering BlockManagerMaster
18/06/11 19:32:15 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
18/06/11 19:32:15 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
18/06/11 19:32:16 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-623c4b9e-8045-4a19-a746-96a3b23c1184
18/06/11 19:32:16 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
18/06/11 19:32:16 INFO SparkEnv: Registering OutputCommitCoordinator
18/06/11 19:32:16 INFO Utils: Successfully started service 'SparkUI' on port 4040.
18/06/11 19:32:16 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.1.185.87:4040
18/06/11 19:32:16 INFO SparkContext: Added JAR file:///opt/spark/jars/spark-0.0.1-SNAPSHOT.jar at spark://10.1.185.87:46544/jars/spark-0.0.1-SNAPSHOT.jar with timestamp 1528745536460
18/06/11 19:32:16 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://10.1.170.81:7077...
18/06/11 19:32:36 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://10.1.170.81:7077...
18/06/11 19:32:56 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://10.1.170.81:7077...
18/06/11 19:33:16 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
18/06/11 19:33:16 WARN StandaloneSchedulerBackend: Application ID is not initialized yet.
18/06/11 19:33:16 INFO SparkUI: Stopped Spark web UI at http://10.1.185.87:4040
18/06/11 19:33:16 INFO StandaloneSchedulerBackend: Shutting down all executors
18/06/11 19:33:16 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46323.
18/06/11 19:33:16 INFO NettyBlockTransferService: Server created on 10.1.185.87:46323
18/06/11 19:33:16 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/06/11 19:33:16 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
18/06/11 19:33:16 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.1.185.87, 46323, None)
18/06/11 19:33:16 WARN StandaloneAppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
18/06/11 19:33:16 INFO BlockManagerMasterEndpoint: Registering block manager 10.1.185.87:46323 with 366.3 MB RAM, BlockManagerId(driver, 10.1.185.87, 46323, None)
18/06/11 19:33:16 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.1.185.87, 46323, None)
18/06/11 19:33:16 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.1.185.87, 46323, None)
18/06/11 19:33:16 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/06/11 19:33:16 INFO MemoryStore: MemoryStore cleared
18/06/11 19:33:16 INFO BlockManager: BlockManager stopped
18/06/11 19:33:16 INFO BlockManagerMaster: BlockManagerMaster stopped
18/06/11 19:33:16 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/06/11 19:33:16 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:141)
at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:829)
at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:829)
at scala.Option.map(Option.scala:146)
at org.apache.spark.streaming.StreamingContext$.getOrCreate(StreamingContext.scala:829)
at org.apache.spark.streaming.api.java.JavaStreamingContext$.getOrCreate(JavaStreamingContext.scala:626)
at org.apache.spark.streaming.api.java.JavaStreamingContext.getOrCreate(JavaStreamingContext.scala)
at org.merlin.spark.SparkKafkaStreamingWithGluster.main(SparkKafkaStreamingWithGluster.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:58)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
18/06/11 19:33:16 INFO SparkContext: SparkContext already stopped.

Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:141)
at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:829)
at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:829)
at scala.Option.map(Option.scala:146)
at org.apache.spark.streaming.StreamingContext$.getOrCreate(StreamingContext.scala:829)
at org.apache.spark.streaming.api.java.JavaStreamingContext$.getOrCreate(JavaStreamingContext.scala:626)
at org.apache.spark.streaming.api.java.JavaStreamingContext.getOrCreate(JavaStreamingContext.scala)
at org.merlin.spark.SparkKafkaStreamingWithGluster.main(SparkKafkaStreamingWithGluster.java:42)
... 6 more

Have I chosen the right resource controller for Spark, i.e. Kubernetes StatefulSets? I am new to this environment, and any help is much appreciated.

1 answer

戚飞
2023-03-14

It seems your driver is unable to find the master node. Here is the relevant log line:

18/06/11 19:33:16 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.

Try telnetting to that IP and port from your client machine.
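
If telnet is not available in the client environment (common in minimal container images), a short TCP probe gives the same signal. A minimal sketch in Java, using the master address from the driver log; the class name and timeout are illustrative:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class MasterPortCheck {
        public static void main(String[] args) {
            String host = "10.1.170.81"; // master address from the driver log
            int port = 7077;             // standalone master RPC port
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 5000); // 5-second timeout
                System.out.println("TCP connection to " + host + ":" + port + " succeeded");
            } catch (IOException e) {
                System.out.println("Cannot reach " + host + ":" + port + ": " + e);
            }
        }
    }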
