Question:

ExecutorLostFailure when running a job in Spark

裴金鑫
2023-03-14
spark.master mesos://uc1f-bioinfocloud-vamp-m-1:5050
spark.eventLog.enabled true
spark.driver.memory 6g
spark.storage.memoryFraction 0.7
spark.core.connection.ack.wait.timeout 800
spark.akka.frameSize 50
spark.rdd.compress true

I am trying to run the Spark MLlib Naive Bayes algorithm on a folder containing roughly 14 GB of data (the job runs without any problem on a 6 GB folder). I read this folder from Google Storage as an RDD, passing 32 as the number of partitions (I have also tried increasing the partitions). I then generate TF feature vectors and run the prediction on top of them. But every time I try to run the job on this folder, it throws an ExecutorLostFailure. I have tried different configurations, but nothing has helped. I may be missing something very basic and just cannot figure it out. Any help or suggestions would be very much appreciated.
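For context, a minimal PySpark sketch of the kind of pipeline described above. The gs:// path, the record format, and the label extraction are hypothetical placeholders, not the original code:

# Minimal sketch, not the original job: the bucket path, the record
# layout, and the label field below are assumptions for illustration.
from pyspark import SparkContext
from pyspark.mllib.feature import HashingTF
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import NaiveBayes

sc = SparkContext(appName="NaiveBayesOverTF")

# Read the folder from Google Storage as an RDD with 32 partitions.
raw = sc.textFile("gs://some-bucket/records/*", 32)

# Assume tab-separated records of the form: label \t document text.
def parse(line):
    fields = line.split("\t")
    return float(fields[0]), fields[1]

# Hash each document's terms into a fixed-size TF feature vector.
tf = HashingTF(numFeatures=1 << 18)
training = raw.map(parse).map(
    lambda lt: LabeledPoint(lt[0], tf.transform(lt[1].split(" "))))

model = NaiveBayes.train(training)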

The log is:

15/07/21 01:18:20 ERROR TaskSetManager: Task 3 in stage 2.0 failed 4 times; aborting job
15/07/21 01:18:20 INFO TaskSchedulerImpl: Cancelling stage 2    
15/07/21 01:18:20 INFO TaskSchedulerImpl: Stage 2 was cancelled    
15/07/21 01:18:20 INFO DAGScheduler: ResultStage 2 (collect at /opt/work/V2ProcessRecords.py:213) failed in 28.966 s    
15/07/21 01:18:20 INFO DAGScheduler: Executor lost: 20150526-135628-3255597322-5050-1304-S8 (epoch 3)    
15/07/21 01:18:20 INFO BlockManagerMasterEndpoint: Trying to remove executor 20150526-135628-3255597322-5050-1304-S8 from BlockManagerMaster.    
15/07/21 01:18:20 INFO DAGScheduler: Job 2 failed: collect at /opt/work/V2ProcessRecords.py:213, took 29.013646 s    
Traceback (most recent call last):    
  File "/opt/work/V2ProcessRecords.py", line 213, in <module>
    secondPassRDD = firstPassRDD.map(lambda ( name, title,  idval, pmcId, pubDate, article, tags , author, ifSigmaCust, wclass): ( str(name), title,  idval, pmcId, pubDate, article, tags , author, ifSigmaCust , "Yes" if ("PMC" + pmcId) in rddNIHGrant else ("No") , wclass)).collect()    
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 745, in collect    
  File "/usr/local/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__    
  File "/usr/local/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 2.0 failed 4 times, most recent failure: Lost task 3.3 in stage 2.0 (TID 12, vamp-m-2.c.quantum-854.internal): ExecutorLostFailure (executor 20150526-135628-3255597322-5050-1304-S8 lost)    
Driver stacktrace:    
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

15/07/21 01:18:20 INFO BlockManagerMaster: Removed 20150526-135628-3255597322-5050-1304-S8 successfully in removeExecutor
15/07/21 01:18:20 INFO DAGScheduler: Host added was in lost list earlier:vamp-m-2.c.quantum-854.internal
Jul 21, 2015 1:01:15 AM INFO: parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
15/07/21 01:18:20 INFO SparkContext: Invoking stop() from shutdown hook

{"Event":"SparkListenerTaskStart","Stage ID":2,"Stage Attempt ID":0,"Task Info":{"Task ID":11,"Index":6,"Attempt":2,"Launch Time":1437616381852,"Executor ID":"20150526-135628-3255597322-5050-1304-S8","Host":"uc1f-bioinfocloud-vamp-m-2.c.quantum-device-854.internal","Locality":"PROCESS_LOCAL","Speculative":false,"Getting Result Time":0,"Finish Time":0,"Failed":false,"Accumulables":[]}}

1 Answer

濮阳宜
2023-03-14

As far as I understand, the most common cause of an ExecutorLostFailure is an OOM (out of memory) inside the executor.

To resolve an OOM you need to figure out what is actually causing it. Simply increasing the default parallelism or the executor memory is not a strategic solution.

If you look at what increasing parallelism actually does: it creates more partitions, and hence more tasks, so that each task processes less and less data. But if your data is skewed, so that the key on which the data is partitioned (for parallelism) holds far more records than the others, merely increasing parallelism will have no effect.
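A quick way to check for that kind of skew is to count the records per partition of the failing RDD. A sketch, reusing firstPassRDD from the question's traceback (the helper name is illustrative):

# Sketch: measure how evenly records are spread across partitions.
def count_per_partition(idx, records):
    # Yield (partition index, number of records) for this partition.
    yield idx, sum(1 for _ in records)

sizes = firstPassRDD.mapPartitionsWithIndex(count_per_partition).collect()
# If the top few partitions dwarf the rest, the partitioning key is skewed.
for idx, n in sorted(sizes, key=lambda p: -p[1])[:10]:
    print("partition %d: %d records" % (idx, n))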
