Question:

Spark on Kubernetes error: Pod already exists

梁丘烨
2023-03-14

When I try to submit my application via spark-submit, I get the following error. Please help me resolve this problem.

Error:

pod name: newdriver
         namespace: default
         labels: spark-app-selector -> spark-a17960c79886423383797eaa77f9f706, spark-role -> driver
         pod uid: 0afa41ae-4e4c-47be-86a3-1ef77739506c
         creation time: 2020-05-06T14:11:29Z
         service account name: spark
         volumes: spark-local-dir-1, spark-conf-volume, spark-token-tks2g
         node name: minikube
         start time: 2020-05-06T14:11:29Z
         phase: Running
         container status:
                 container name: spark-kubernetes-driver
                 container image: spark-py:v3.0
                 container state: running
                 container started at: 2020-05-06T14:11:31Z
Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://172.17.0.2:8443/api/v1/namespaces/default/pods. Message: pods "newtrydriver" already exists. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=null, kind=pods, name=newtrydriver, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=pods "newtrydriver" already exists, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=AlreadyExists, status=Failure, additionalProperties={}).
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:510)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:449)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:413)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:372)
        at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:241)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:819)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:334)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:330)
        at org.apache.spark.deploy.k8s.submit.Client.$anonfun$run$2(KubernetesClientApplication.scala:130)
        at org.apache.spark.deploy.k8s.submit.Client.$anonfun$run$2$adapted(KubernetesClientApplication.scala:129)
        at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2539)
        at org.apache.spark.deploy.k8s.submit.Client.run(KubernetesClientApplication.scala:129)
        at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$4(KubernetesClientApplication.scala:221)
        at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$4$adapted(KubernetesClientApplication.scala:215)
        at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2539)
        at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:215)
        at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:188)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
20/05/06 14:11:34 INFO ShutdownHookManager: Shutdown hook called
20/05/06 14:11:34 INFO ShutdownHookManager: Deleting directory /tmp/spark-b7ea9c80-6040-460a-ba43-5c6e656d3039

The spark-submit command used (the application path after local: is cut off here):

./spark-submit \
  --master k8s://https://172.17.0.2:8443 \
  --deploy-mode cluster \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.container.image=spark-py:v3.0 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --name newtry \
  --conf spark.kubernetes.driver.pod.name=newdriver \
  local:
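A note on the driver pod name: the 409 AlreadyExists response above means the requested driver pod name is already taken by a pod from an earlier run. Per the Spark on Kubernetes documentation, if spark.kubernetes.driver.pod.name is not set, the driver pod name defaults to spark.app.name suffixed with the current timestamp, precisely to avoid this kind of conflict. A sketch of the same submission relying on that default (the truncated local: path is left as in the original):

./spark-submit \
  --master k8s://https://172.17.0.2:8443 \
  --deploy-mode cluster \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.container.image=spark-py:v3.0 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --name newtry \
  local: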

1 Answer

赖明煦
2023-03-14

Please check whether a pod named newdriver already exists in the default namespace by running kubectl get pods --namespace default --show-all. You may have a terminated/completed Spark driver pod with this name left over from a previous run. If so, delete it with kubectl delete pod newdriver --namespace default, then try launching the new Spark job again.
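A minimal sketch of those two steps (the pod name and namespace are the ones given in the answer; note that recent kubectl releases removed the --show-all flag and list completed pods by default, so on a current kubectl it can simply be dropped):

# List pods in the default namespace; a leftover driver pod would show up here.
kubectl get pods --namespace default --show-all     # older kubectl; on newer versions: kubectl get pods --namespace default

# If a pod named "newdriver" is listed, remove it before resubmitting the job.
kubectl delete pod newdriver --namespace default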
