Question:

Spark Streaming job not recoverable

花品
2023-03-14

I am running a Spark Streaming job that uses mapWithState with an initial RDD. When the application is restarted and recovers from the checkpoint, it fails with the error:

This RDD lacks a SparkContext. It could happen in the following cases:

(1) RDD transformations and actions are NOT invoked by the driver, but inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.
(2) When a Spark Streaming job recovers from checkpoint, this exception will be hit if a reference to an RDD not defined by the streaming job is used in DStream operations. For more information, See SPARK-13758.

This behavior is described in https://issues.apache.org/jira/browse/SPARK-13758, but the ticket doesn't really describe how to solve it. My RDD is not defined by the streaming job, but I still need it as the initial state.
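
Distilled, the pattern that case (2) forbids looks like the following minimal sketch (the identifiers here are illustrative, not from my actual job):

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{State, StateSpec}
import org.apache.spark.streaming.dstream.DStream

// `historical` is built from the driver's SparkContext, outside the streaming
// graph. Its reference is serialized into the checkpoint, and after a restart
// it still points at the previous, now-stopped SparkContext.
def illustrate(sc: SparkContext, stream: DStream[(String, Long)]): Unit = {
  val historical: RDD[(String, Long)] = sc.parallelize(Seq("a" -> 0L))

  def track(key: String, value: Option[Long], state: State[Long]): Long = {
    val next = state.getOption().getOrElse(0L) + value.getOrElse(0L)
    state.update(next)
    next
  }

  // Fine on a cold start; on recovery this external-RDD reference is exactly
  // what SPARK-13758 flags.
  stream.mapWithState(StateSpec.function(track _).initialState(historical))
}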

Here is an example of my graph:

class EventStreamingApplication {
  private val config: Config = ConfigFactory.load()
  private val sc: SparkContext = {
    val conf = new SparkConf()
      .setAppName(config.getString("streaming.appName"))
      .set("spark.cassandra.connection.host", config.getString("streaming.cassandra.host"))
    val sparkContext = new SparkContext(conf)
    System.setProperty("com.amazonaws.services.s3.enableV4", "true")
    sparkContext.hadoopConfiguration.set("com.amazonaws.services.s3.enableV4", "true")
    sparkContext
  }

  def run(): Unit = {
    // streaming.eventCheckpointDir is an S3 Bucket
    val ssc: StreamingContext = StreamingContext.getOrCreate(config.getString("streaming.eventCheckpointDir"), createStreamingContext)
    ssc.start()
    ssc.awaitTermination()
  }

  def receiver(ssc: StreamingContext): DStream[Event] = {
    RabbitMQUtils.createStream(ssc, Map(
      "hosts" -> config.getString("streaming.rabbitmq.host"),
      "virtualHost" -> config.getString("streaming.rabbitmq.virtualHost"),
      "userName" -> config.getString("streaming.rabbitmq.user"),
      "password" -> config.getString("streaming.rabbitmq.password"),
      "exchangeName" -> config.getString("streaming.rabbitmq.eventExchange"),
      "exchangeType" -> config.getString("streaming.rabbitmq.eventExchangeType"),
      "queueName" -> config.getString("streaming.rabbitmq.eventQueue")
    )).flatMap(EventParser.apply)
  }

  def setupStreams(ssc: StreamingContext): Unit = {
    val events = receiver(ssc)
    ExampleJob(events, sc)
  }

  private def createStreamingContext(): StreamingContext = {
    val ssc = new StreamingContext(sc, Seconds(config.getInt("streaming.batchSeconds")))
    setupStreams(ssc)
    ssc.checkpoint(config.getString("streaming.eventCheckpointDir"))
    ssc
  }
}

case class Aggregation(value: Long) // Contains aggregation values

object ExampleJob {
  def apply(events: DStream[Event], sc: SparkContext): Unit = {
    val aggregations: RDD[(String, Aggregation)] = sc.cassandraTable("...", "...").map(...) // some domain class mapping
    val state = StateSpec
      .function((key, value, state) => {
        val oldValue = state.getOption().map(_.value).getOrElse(0L)
        val newValue = oldValue + value.getOrElse(0L)
        state.update(Aggregation(newValue))
        state.get
      })
      .initialState(aggregations)
      .numPartitions(1)
      .timeout(Seconds(86400))
    events
      .filter(...) // filter out unnecessary events
      .map(...) // domain class mapping to key, event dstream
      .groupByKey()
      .map(i => (i._1, i._2.size.toLong))
      .mapWithState(state)
      .stateSnapshots()
      .foreachRDD(rdd => {
        rdd.saveToCassandra(...)
      })
  }
}

The stack trace thrown is:

Exception in thread "main" org.apache.spark.SparkException: This RDD lacks a SparkContext. It could happen in the following cases: 
(1) RDD transformations and actions are NOT invoked by the driver, but inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.
(2) When a Spark Streaming job recovers from checkpoint, this exception will be hit if a reference to an RDD not defined by the streaming job is used in DStream operations. For more information, See SPARK-13758.
  at org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$sc(RDD.scala:89)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
  at org.apache.spark.rdd.PairRDDFunctions.partitionBy(PairRDDFunctions.scala:534)
  at org.apache.spark.streaming.rdd.MapWithStateRDD$.createFromPairRDD(MapWithStateRDD.scala:193)
  at org.apache.spark.streaming.dstream.InternalMapWithStateDStream.compute(MapWithStateDStream.scala:146)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
  at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
  at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:333)
  at scala.Option.orElse(Option.scala:289)
  at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:330)
  at org.apache.spark.streaming.dstream.InternalMapWithStateDStream.compute(MapWithStateDStream.scala:134)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
  at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
  at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:333)
  at scala.Option.orElse(Option.scala:289)
  ...
  <991 lines omitted>
  ...
  at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:330)
  at org.apache.spark.streaming.dstream.InternalMapWithStateDStream.compute(MapWithStateDStream.scala:134)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
  at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
  at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
  at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
  at ... run in separate thread using org.apache.spark.util.ThreadUtils ... ()
  at org.apache.spark.streaming.StreamingContext.liftedTree1$1(StreamingContext.scala:577)
  at org.apache.spark.streaming.StreamingContext.start(StreamingContext.scala:571)
  at com.example.spark.EventStreamingApplication.run(EventStreamingApplication.scala:31)
  at com.example.spark.EventStreamingApplication$.main(EventStreamingApplication.scala:63)
  at com.example.spark.EventStreamingApplication.main(EventStreamingApplication.scala)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:497)
  at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
  at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
  at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
  at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
  at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

1 Answer

盛琪
2023-03-14

It seems that when Spark tries to recover, it does not pick the correct latest checkpoint file. Because of this, the wrong RDD is referenced.

It looks like Spark version 2.1.1 is affected, since it is not in the list of fixed versions.

Please see the link below to the Apache JIRA ticket; no fix version has been specified yet.

https://issues.apache.org/jira/browse/SPARK-19280

In my opinion, you could try exploring an automated/manual workaround in which you specify the latest valid checkpoint file when restarting the Spark job.
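
For the manual route, a small inspection utility can help you see which checkpoint file recovery would pick. This is only a sketch of the idea; it assumes Spark Streaming's usual layout of checkpoint-<time> files directly under the checkpoint directory, so verify against your own bucket before deleting anything:

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// List checkpoint files, newest first. If the newest file is the corrupt one,
// deleting it (manually, after verification) makes recovery fall back to the
// next one. The "checkpoint-" prefix is an assumption about the layout.
def listCheckpointFiles(checkpointDir: String): Unit = {
  val fs = FileSystem.get(new URI(checkpointDir), new Configuration())
  fs.listStatus(new Path(checkpointDir))
    .filter(_.getPath.getName.startsWith("checkpoint-"))
    .sortBy(-_.getModificationTime)
    .foreach(f => println(s"${f.getModificationTime}  ${f.getPath}"))
}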

I know this isn't much help, but I thought it best to explain the root cause of the problem, the ongoing work to fix it, and my view of a possible solution.
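
And if recovery is completely stuck, one last-resort fallback (my own suggestion, not something the JIRA prescribes) is to drop the checkpoint directory and cold-start, letting .initialState(...) re-seed the state from Cassandra at the cost of reprocessing the gap:

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.SparkException
import org.apache.spark.streaming.StreamingContext

// Try normal recovery first; if it dies with the "lacks a SparkContext" error,
// delete the checkpoint directory and build a fresh context. This assumes the
// JVM/SparkContext is still usable after the failed start() -- if not, split
// the retry into a second launch of the application instead.
def runWithFallback(checkpointDir: String, create: () => StreamingContext): Unit = {
  try {
    val ssc = StreamingContext.getOrCreate(checkpointDir, create)
    ssc.start()
    ssc.awaitTermination()
  } catch {
    case e: SparkException if e.getMessage.contains("lacks a SparkContext") =>
      val fs = FileSystem.get(new URI(checkpointDir), new Configuration())
      fs.delete(new Path(checkpointDir), true) // recursive: drops all checkpoint files
      val ssc = create()
      ssc.start()
      ssc.awaitTermination()
  }
}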
