Spark on K8S (spark-on-kubernetes-operator): Common Issues (Part 1)

姜德泽
2023-12-01

Common issues encountered while working through the Spark demo (Part 1)

The executor always fails to pull its image at startup

When the kubelet starts the pod on a node, the SparkApplication must carry an imagePullSecret so the image can be pulled from the private registry. Two steps are needed:

  1. Create the docker-registry secret

kubectl create secret docker-registry harborsecret --docker-server=harbor.demo.com.cn --docker-username='docker-admin' --docker-password='pwd' --docker-email='admin@demo.com'

  2. Reference it via imagePullSecrets in the SparkApplication YAML (the name must match the secret created in step 1; a fuller sketch follows):
imagePullSecrets: ["harborsecret"]
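
For orientation, a minimal SparkApplication sketch showing where imagePullSecrets sits; the apiVersion and the image path in the harbor registry are assumptions, not values taken from this environment:

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi
spec:
  type: Scala
  mode: cluster
  image: "harbor.demo.com.cn/spark/spark:v2.4.4"   # assumed image path in the private registry
  imagePullSecrets:
    - harborsecret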

How to let the Spark program connect to an HDFS cluster

To be written up in detail once I have a basic grasp of how Spark works; for now I don't understand it yet.

How to configure Hadoop parameters

There are two officially recommended ways, plus one quick-and-dirty way.

The two officially recommended ways are: set individual Hadoop properties via .spec.hadoopConf, or create a Hadoop ConfigMap containing the required configuration files (hdfs-site.xml, core-site.xml, etc.) and hand it to the driver and executors via .spec.hadoopConfigMap. The official description:

Specifying Hadoop Configuration
There are two ways to add Hadoop configuration: setting individual Hadoop configuration properties using the optional field .spec.hadoopConf or mounting a special Kubernetes ConfigMap storing Hadoop configuration files (e.g. core-site.xml) using the optional field .spec.hadoopConfigMap. The operator automatically adds the prefix spark.hadoop. to the names of individual Hadoop configuration properties in .spec.hadoopConf. If .spec.hadoopConfigMap is used, additionally to mounting the ConfigMap into the driver and executors, the operator additionally sets the environment variable HADOOP_CONF_DIR to point to the mount path of the ConfigMap.
The following is an example showing the use of individual Hadoop configuration properties:

spec:
  hadoopConf:
    "fs.gs.project.id": spark
    "fs.gs.system.bucket": spark
    "google.cloud.auth.service.account.enable": true
    "google.cloud.auth.service.account.json.keyfile": /mnt/secrets/key.json

In short: with hadoopConf you can specify individual properties directly, which gets verbose; with the ConfigMap approach, the operator mounts the ConfigMap into the driver/executor pods when it creates them and sets HADOOP_CONF_DIR to the ConfigMap's mount path.
This isn't fully spelled out here; there is another passage further down:

Mounting a ConfigMap storing Hadoop Configuration Files
A SparkApplication can specify a Kubernetes ConfigMap storing Hadoop configuration files such as core-site.xml using the optional field .spec.hadoopConfigMap whose value is the name of the ConfigMap. The ConfigMap is assumed to be in the same namespace as that of the SparkApplication. The operator mounts the ConfigMap onto path /etc/hadoop/conf in both the driver and executors. Additionally, it also sets the environment variable HADOOP_CONF_DIR to point to /etc/hadoop/conf in the driver and executors.

In short: the ConfigMap must be in the same namespace as the SparkApplication; the operator mounts it at /etc/hadoop/conf in both the driver and executor pods and sets the HADOOP_CONF_DIR variable to /etc/hadoop/conf.
So from the passage above alone you can probably get it configured without fully grasping what is going on. In essence, when .spec.hadoopConfigMap is set, the operator does two things for us (a minimal example follows the list):

  • mounts the specified ConfigMap into the driver and executor pods, at the fixed path /etc/hadoop/conf
  • sets the environment variable HADOOP_CONF_DIR=/etc/hadoop/conf
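
A minimal sketch of that route, assuming a ConfigMap named hadoop-conf has already been created in the SparkApplication's namespace from the Hadoop config files:

# e.g. kubectl create configmap hadoop-conf --from-file=core-site.xml --from-file=hdfs-site.xml
spec:
  hadoopConfigMap: hadoop-conf   # operator mounts it at /etc/hadoop/conf and sets HADOOP_CONF_DIR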

If you don't use the hadoopConfigMap field, you have to do the equivalent yourself, which is also straightforward:

  • Create the ConfigMap (with --from-file or similar; refer to the Kubernetes docs on creating a ConfigMap)

  • Modify the driver/executor pod YAML to mount the ConfigMap into a directory in the pod (this is more flexible, since you pick the directory yourself). Volumes support mounting many resource types, ConfigMap being one of them; see https://kubernetes.io/zh/docs/concepts/storage/volumes/ for the supported types.

  • Set the pod's HADOOP_CONF_DIR environment variable (also flexible; just make sure it matches wherever the files are mounted; see the sketch after this list)
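
A sketch of this manual route expressed as SparkApplication fields; the ConfigMap name hadoop-conf and the mount path are assumptions, and envVars is the operator's field for plain environment variables:

spec:
  volumes:
    - name: hadoop-conf
      configMap:
        name: hadoop-conf
  driver:
    volumeMounts:
      - name: hadoop-conf
        mountPath: /opt/hadoop/conf    # any path, as long as HADOOP_CONF_DIR matches
    envVars:
      HADOOP_CONF_DIR: /opt/hadoop/conf
  executor:
    volumeMounts:
      - name: hadoop-conf
        mountPath: /opt/hadoop/conf
    envVars:
      HADOOP_CONF_DIR: /opt/hadoop/conf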

Besides the two officially recommended ways, there is a blunter option: when building the spark-pi image, bake the needed configuration files into a fixed directory inside the image, and have the spark-pi code itself initialize and create the Hadoop Configuration / SparkSession. The effect is the same, it also handles the Kerberos problem nicely, and this has been verified.

The only problem encountered was that the executors could not obtain usable resources. That is discussed further below; it is not necessarily related to this blunt approach and may be caused by something else.

Update 2020-01-16: a workaround for the problem above seems to have been found, though the root cause has not been pinned down.

The log message that used to appear:
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
The problem went away after a dependency on an in-house component was removed from the spark-pi code; with only the hadoop-client related dependencies kept, the job runs normally. The in-house component drags in too many dependencies to tell which jar was actually causing the problem.

PS: If you don't want to go as far as rebuilding the image, you can also add an initContainers step that downloads the required configuration files, dependency jars, etc. before the spark-pi container runs. This is more elegant and flexible; besides solving the configuration problem it also covers other pre-run preparation scenarios.
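
A rough sketch of that variant, assuming the operator version in use exposes initContainers under driver/executor; the image, URL and paths below are placeholders:

spec:
  volumes:
    - name: deps
      emptyDir: {}
  driver:
    initContainers:
      - name: fetch-deps
        image: busybox:1.32                      # placeholder image
        command: ["sh", "-c", "wget -O /deps/extra.jar http://repo.example.com/extra.jar"]
        volumeMounts:
          - name: deps
            mountPath: /deps
    volumeMounts:
      - name: deps
        mountPath: /deps                         # the Spark container sees the downloaded files here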

How to pin drivers and executors to different nodes

I couldn't really find this in the official docs either, so I went straight to the spark-on-kubernetes-operator source and saw that the SparkApplication YAML definition includes a nodeSelector key. Configured as follows it behaves as expected (note that the target nodes must already carry matching labels, e.g. type=drivers / type=executors, for the selectors to match):

  driver:
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
    nodeSelector:
      type: drivers
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    labels:
      version: 2.4.4
    serviceAccount: spark
  executor:
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
    nodeSelector:
      type: executors
    cores: 1
    instances: 2
    memory: "512m"
    labels:
      version: 2.4.4

How to configure driver/executor environment variables

Normally, for an ordinary pod, configuration like the following is all it takes:

    spec:
      containers:
      - env:
        - name: JAVA_OPTS
          value: -Xms4096m -Xmx5120m
        envFrom:
        - configMapRef:
            name: environment-profile

However, in the spark-operator source (the CRD definition) the env-related settings only exist under driver/executor:

                envSecretKeyRefs:
                  additionalProperties:
                    properties:
                      key:
                        type: string
                      name:
                        type: string
                    required:
                    - key
                    - name
                    type: object
                  type: object
                envVars:
                  additionalProperties:
                    type: string
                  type: object

The official guidance also recommends using a Secret, so that's what I went with; I haven't tried the other options and suspect they wouldn't work. A usage sketch follows the command below.

kubectl create secret generic spark-environment --from-literal=principalName=xxxxxxx
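
A sketch of how that Secret (and a plain variable) can then be surfaced to the driver/executor via the envVars / envSecretKeyRefs fields shown in the CRD snippet above; the environment variable names are made up for illustration:

spec:
  driver:
    envVars:
      JAVA_OPTS: "-Xms512m -Xmx1024m"          # plain environment variable
    envSecretKeyRefs:
      PRINCIPAL_NAME:                          # env var name (made up)
        name: spark-environment                # the Secret created above
        key: principalName
  executor:
    envSecretKeyRefs:
      PRINCIPAL_NAME:
        name: spark-environment
        key: principalName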

How to configure Spark runtime parameters

Same story as with Hadoop: the official docs also cover how to set sparkConf, and the recommended approaches are exactly the same. The original text:

Specifying Spark Configuration
There are two ways to add Spark configuration: setting individual Spark configuration properties using the optional field .spec.sparkConf or mounting a special Kubernetes ConfigMap storing Spark configuration files (e.g. spark-defaults.conf, spark-env.sh, log4j.properties) using the optional field .spec.sparkConfigMap. If .spec.sparkConfigMap is used, additionally to mounting the ConfigMap into the driver and executors, the operator additionally sets the environment variable SPARK_CONF_DIR to point to the mount path of the ConfigMap.

spec:
  sparkConf:
    "spark.ui.port": "4045"
    "spark.eventLog.enabled": "true"
    "spark.eventLog.dir": "hdfs://hdfs-namenode-1:8020/spark/spark-events"

Same pattern, just a different configuration key; as I understand it, the implementation works the same way as the Hadoop ConfigMap (a minimal sketch of the ConfigMap variant follows). Since I don't yet know Spark well, there isn't much more to write here.
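
For completeness, a two-line sketch of the ConfigMap variant mentioned in the quote, assuming a ConfigMap named spark-conf that holds spark-defaults.conf / log4j.properties:

spec:
  sparkConfigMap: spark-conf   # operator mounts it and points SPARK_CONF_DIR at the mount path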

About Kerberos

Once the basic environment is up, if you want Spark job event logs in the K8S cluster to be persisted to HDFS, you can configure:

"spark.eventLog.dir": "hdfs://10.120.16.127:25000/spark-test/spark-events"

If the Hadoop cluster has Kerberos authentication, this needs separate handling. Two possible approaches:

  1. Do the Kerberos authentication yourself inside the user's Spark jar. In the Hadoop source the Kerberos login state is held in a class-level static field, so it is shared by the whole process; a single login takes effect across different Configuration / SparkContext instances.
  2. Do the Kerberos authentication purely through Spark configuration parameters, with no user code.

Method 1: tested and works. You only need to get the krb5.conf and user.keytab used for the Kerberos login into the containers, either by ADDing them into the Docker image or by mounting a directory via a ConfigMap volumeMount; a sketch of the mount variant (using a Secret) follows. What still needs verifying is whether the Kerberos login only takes effect on the driver and problems remain on the executors.
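
A sketch of the volume-mount variant of method 1, assuming krb5.conf and user.keytab have been packed into a Secret named krb5-credentials (a Secret is used here because the keytab is a credential; a ConfigMap volume works the same way):

spec:
  volumes:
    - name: krb5
      secret:
        secretName: krb5-credentials   # assumed to contain krb5.conf and user.keytab
  driver:
    volumeMounts:
      - name: krb5
        mountPath: /etc/krb5           # the user code then points UserGroupInformation at these files
  executor:
    volumeMounts:
      - name: krb5
        mountPath: /etc/krb5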

Method 2: from the material I could find, this does not work, because Kerberos authentication ultimately requires:

UserGroupInformation.loginUserFromKeytab(principal, keytabFile);

to complete the login, so that statement has to be executed after spark-submit for the login to happen.

Spark 2.4.4's documentation says (official GitHub):

Spark supports automatically creating new tokens for these applications when running in YARN mode. 
Kerberos credentials need to be provided to the Spark application via the spark-submit command, 
using the --principal and --keytab parameters.

So even modifying the properties files under /etc/spark/conf or the sparkConfigMap (essentially the same thing) cannot achieve this, because the keytab is not passed in via a --conf parameter.

Spark 3.0's documentation says (official GitHub):

When talking to Hadoop-based services behind Kerberos, it was noted that Spark needs to 
obtain delegation tokens so that non-local processes can authenticate. These delegation 
tokens in Kubernetes are stored in Secrets that are shared by the Driver and its Executors. 
As such, there are three ways of submitting a Kerberos job:

1.Submitting with a $kinit that stores a TGT in the Local Ticket Cache:
/usr/bin/kinit -kt <keytab_file> <username>/<krb5 realm>
/opt/spark/bin/spark-submit \
    --deploy-mode cluster \
    --class org.apache.spark.examples.HdfsTest \
    --master k8s://<KUBERNETES_MASTER_ENDPOINT> \
    --conf spark.executor.instances=1 \
    --conf spark.app.name=spark-hdfs \
    --conf spark.kubernetes.container.image=spark:latest \
    --conf spark.kubernetes.kerberos.krb5.path=/etc/krb5.conf \
    local:///opt/spark/examples/jars/spark-examples_<VERSION>.jar \
    <HDFS_FILE_LOCATION>

2.Submitting with a local Keytab and Principal
    /opt/spark/bin/spark-submit \
    --deploy-mode cluster \
    --class org.apache.spark.examples.HdfsTest \
    --master k8s://<KUBERNETES_MASTER_ENDPOINT> \
    --conf spark.executor.instances=1 \
    --conf spark.app.name=spark-hdfs \
    --conf spark.kubernetes.container.image=spark:latest \
    --conf spark.kerberos.keytab=<KEYTAB_FILE> \
    --conf spark.kerberos.principal=<PRINCIPAL> \
    --conf spark.kubernetes.kerberos.krb5.path=/etc/krb5.conf \
    local:///opt/spark/examples/jars/spark-examples_<VERSION>.jar \
    <HDFS_FILE_LOCATION>

3.Submitting with pre-populated secrets, that contain the Delegation Token, already existing within the namespace
    /opt/spark/bin/spark-submit \
    --deploy-mode cluster \
    --class org.apache.spark.examples.HdfsTest \
    --master k8s://<KUBERNETES_MASTER_ENDPOINT> \
    --conf spark.executor.instances=1 \
    --conf spark.app.name=spark-hdfs \
    --conf spark.kubernetes.container.image=spark:latest \
    --conf spark.kubernetes.kerberos.tokenSecret.name=<SECRET_TOKEN_NAME> \
    --conf spark.kubernetes.kerberos.tokenSecret.itemKey=<SECRET_ITEM_KEY> \
    --conf spark.kubernetes.kerberos.krb5.path=/etc/krb5.conf \
    local:///opt/spark/examples/jars/spark-examples_<VERSION>.jar \
    <HDFS_FILE_LOCATION>

If we upgraded to Spark 3.0, option 2 would let us point at the krb5 and keytab files purely through Spark parameters. Unfortunately we are on 2.4.4, so that route is not available in the current environment, and method 1 is the only way to get Kerberos authentication working.

Update 2020-01-16: confirmed that method 1 solves the Kerberos authentication problem.
I haven't yet confirmed whether the executors hit problems when connecting to HDFS, but after adding the ConfigMap volume the driver starts normally, Pi is computed correctly, the code also lists an HDFS directory successfully, and the Spark logs are persisted to the specified HDFS directory. Two things to note:

  1. The HADOOP_CONF_DIR environment variable can only be /etc/hadoop/conf; this seems to be set internally by spark-operator and cannot be changed.
  2. The krb5 path needs to be passed separately via the JVM options, otherwise the file cannot be found; the option below does this (sketch follows):
"spark.driver.extraJavaOptions": "-Djava.security.krb5.conf=/etc/hadoop/conf/krb5.conf"