Question:

Spark job submission stuck waiting (TaskSchedulerImpl: initial job has not accepted any resources)

龙隐水
2023-03-14

On the cluster UI -

Worker (slave) - worker-20160712083825-172.31.17.189-59433 ALIVE

1 of 2 cores used

Active Stages

reduceByKey at /root/wordcount.py:23

Pending Stages

stderr log page for driver-20160713130051-0025 

WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

In response to the TaskSchedulerImpl warning (Initial job has not accepted any resources), this is what I allocated in

~/spark-1.5.0/conf/spark-env.sh

Spark environment variables:

SPARK_WORKER_INSTANCES=1
SPARK_WORKER_MEMORY=1000m
SPARK_WORKER_CORES=2

Copied those to the slaves with:

sudo /root/spark-ec2/copy-dir /root/spark/conf/spark-env.sh
from pyspark import SparkContext, SparkConf

logFile = "/user/root/In/a.txt"

conf = (SparkConf().set("num-executors", "1"))

sc = SparkContext(master = "spark://ec2-54-209-108-127.compute-1.amazonaws.com:7077", appName = "MyApp", conf = conf)
print("in here")
lines = sc.textFile(logFile)
print("text read")
c = lines.count()
print("lines counted")
Starting job: count at /root/wordcount.py:11
16/07/18 07:46:39 INFO scheduler.DAGScheduler: Got job 0 (count at /root/wordcount.py:11) with 2 output partitions
16/07/18 07:46:39 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (count at /root/wordcount.py:11)
16/07/18 07:46:39 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/07/18 07:46:39 INFO scheduler.DAGScheduler: Missing parents: List()
16/07/18 07:46:39 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (PythonRDD[2] at count at /root/wordcount.py:11), which has no missing parents
16/07/18 07:46:39 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 5.6 KB, free 56.2 KB)
16/07/18 07:46:39 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 3.4 KB, free 59.7 KB)
16/07/18 07:46:39 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.31.17.189:43684 (size: 3.4 KB, free: 511.5 MB)
16/07/18 07:46:39 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
16/07/18 07:46:39 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (PythonRDD[2] at count at /root/wordcount.py:11)
16/07/18 07:46:39 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/07/18 07:46:54 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Spark version 1.6.1, Ubuntu, on Amazon EC2
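A side note on the configuration above: "num-executors" is not a property the standalone scheduler reads (the --num-executors flag and spark.executor.instances apply to YARN deployments), and the default spark.executor.memory of 1g cannot fit on a worker that only advertises 1000m, which is a common cause of this warning. A minimal sketch, assuming the same master URL and standalone mode, of capping the application with the usual standalone properties instead:

from pyspark import SparkContext, SparkConf

# Illustrative values only: keep the executor request within what the worker
# offers (SPARK_WORKER_MEMORY=1000m) and bound the cores the application may claim.
conf = (SparkConf()
        .setMaster("spark://ec2-54-209-108-127.compute-1.amazonaws.com:7077")
        .setAppName("MyApp")
        .set("spark.executor.memory", "512m")   # must be <= the worker's 1000m
        .set("spark.cores.max", "2"))           # total cores this app may use

sc = SparkContext(conf=conf)
print(sc.parallelize(range(100)).count())       # small smoke test before touching HDFS
sc.stop()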

1 answer in total

钱承允
2023-03-14

I had the same problem. Below are my notes from when it happened.

1:17:46 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

I noticed that it only happened during the first query executed from the Scala shell, where I run something that fetches data from HDFS.

URL: spark://spark1:7077
REST URL: spark://spark1:6066 (cluster mode)
Alive Workers: 4
Cores in use: 26 Total, 26 Used
Memory in use: 52.7 GB Total, 4.0 GB Used
Applications: 0 Running, 0 Completed
Drivers: 0 Running, 0 Completed 
Status: ALIVE
URL: spark://spark1:7077
REST URL: spark://spark1:6066 (cluster mode)
Alive Workers: 4
Cores in use: 26 Total, 26 Used
Memory in use: 52.7 GB Total, 4.0 GB Used
Applications: 1 Running, 0 Completed
Drivers: 0 Running, 0 Completed
Status: ALIVE
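What stands out in the master status above is the cores column: 26 of 26 are already in use, so a newly submitted job has nothing left to be scheduled on, which matches the warning. In standalone mode the usual remedy is to cap what each application may take, either with spark-submit's --total-executor-cores and --executor-memory flags or through SparkConf. A minimal sketch with illustrative numbers (the master URL spark://spark1:7077 is taken from the output above):

from pyspark import SparkContext, SparkConf

# Illustrative only: leave some of the 26 cores free so a second application
# submitted later can still acquire executors.
conf = (SparkConf()
        .setMaster("spark://spark1:7077")
        .setAppName("shared-cluster-app")        # hypothetical application name
        .set("spark.cores.max", "20")            # claim at most 20 of the 26 cores
        .set("spark.executor.memory", "2g"))     # well under the 52.7 GB available

sc = SparkContext(conf=conf)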