I am trying to use the Kubernetes DaemonSet rolling update to roll out changes automatically whenever the DaemonSet's spec.template field is modified. I deliberately gave the pods an invalid image so that they cannot start correctly. My expectation was that the rolling update would stop once the number of unavailable pods exceeded the value defined in maxUnavailable. Unfortunately that does not happen, and the pods keep being updated until all of them end up in CrashLoopBackOff.
The cluster has three nodes:

NAME                         STATUS   ROLES    AGE   VERSION
wdc-rdops-vm05-dhcp-74-190   Ready    <none>   65d   v1.18.0
wdc-rdops-vm05-dhcp-86-61    Ready    master   65d   v1.18.0
wdc-rdops-vm05-dhcp-93-214   Ready    <none>   65d   v1.18.0
As suggested in the thread, I added

spec:
  minReadySeconds: 120

to make sure the containers have been running well for a while before the pod is marked available or unavailable.
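For reference, combined with the update strategy already in the manifest, the relevant part of the spec now reads (values copied from the full manifest below):

spec:
  minReadySeconds: 120        # a new pod must stay Ready for 120s before it counts as available
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # at most one pod may be unavailable during the update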
However, in the end all 3 pods crashed. Partway through the rollout:

nsx-system   nsx-node-agent-9cl2v   0/3   CrashLoopBackOff   3    23s
nsx-system   nsx-node-agent-c95wb   3/3   Running            3    11m
nsx-system   nsx-node-agent-p58vs   3/3   Running            3    11m

And a few minutes later:

nsx-system   nsx-node-agent-9cl2v   0/3   CrashLoopBackOff   45   15m
nsx-system   nsx-node-agent-6mlmq   0/3   CrashLoopBackOff   48   2m46s
nsx-system   nsx-node-agent-9fzcc   0/3   CrashLoopBackOff   57   2m59s
The full DaemonSet manifest (the server-generated managedFields block is omitted for brevity):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: "2021-02-21T11:28:03Z"
  generation: 101
  labels:
    component: nsx-node-agent
    tier: nsx-networking
    version: v1
  name: nsx-node-agent
  namespace: nsx-system
  resourceVersion: "14594084"
  selfLink: /apis/apps/v1/namespaces/nsx-system/daemonsets/nsx-node-agent
  uid: e3dd0951-1b31-4095-8c27-56ec9780d94e
spec:
  minReadySeconds: 120
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      component: nsx-node-agent
      tier: nsx-networking
      version: v1
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/nsx-node-agent: localhost/node-agent-apparmor
      creationTimestamp: null
      labels:
        component: nsx-node-agent
        tier: nsx-networking
        version: v1
    spec:
      containers:
      - command:
        - start_node_agent
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: CONTAINER_NAME
          value: nsx-node-agent
        image: registry.access.redhat.com/ubi8/ubi:latest
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - check_pod_liveness nsx-node-agent 5
          failureThreshold: 5
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: nsx-node-agent
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - SYS_ADMIN
            - SYS_PTRACE
            - DAC_READ_SEARCH
            - NET_RAW
            - AUDIT_WRITE
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/nsx-ujo
          name: projected-volume
          readOnly: true
        - mountPath: /var/run/openvswitch
          name: openvswitch
        - mountPath: /var/run/nsx-ujo
          name: var-run-ujo
        - mountPath: /host/var/run/netns
          mountPropagation: HostToContainer
          name: netns
        - mountPath: /host/proc
          name: proc
          readOnly: true
        - mountPath: /var/lib/kubelet/device-plugins/
          name: device-plugins
          readOnly: true
        - mountPath: /host/etc/os-release
          name: host-os-release
          readOnly: true
        - mountPath: /var/log/nsx-ujo
          name: host-var-log-ujo
      - command:
        - start_kube_proxy
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: CONTAINER_NAME
          value: nsx-kube-proxy
        image: registry.access.redhat.com/ubi8/ubi:latest
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - check_pod_liveness nsx-kube-proxy 5
          failureThreshold: 5
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: nsx-kube-proxy
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - SYS_ADMIN
            - SYS_PTRACE
            - DAC_READ_SEARCH
            - NET_RAW
            - AUDIT_WRITE
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/nsx-ujo
          name: projected-volume
          readOnly: true
        - mountPath: /var/run/openvswitch
          name: openvswitch
        - mountPath: /var/log/nsx-ujo
          name: host-var-log-ujo
      - command:
        - start_ovs
        image: registry.access.redhat.com/ubi8/ubi:latest
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - check_pod_liveness nsx-ovs 10
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        name: nsx-ovs
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - SYS_ADMIN
            - SYS_NICE
            - SYS_MODULE
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/nsx-ujo
          name: projected-volume
          readOnly: true
        - mountPath: /etc/openvswitch
          name: var-run-ujo
          subPath: openvswitch-db
        - mountPath: /var/run/openvswitch
          name: openvswitch
        - mountPath: /sys
          name: host-sys
          readOnly: true
        - mountPath: /host/etc/openvswitch
          name: host-original-ovs-db
        - mountPath: /lib/modules
          name: host-modules
          readOnly: true
        - mountPath: /host/etc/os-release
          name: host-os-release
          readOnly: true
        - mountPath: /var/log/openvswitch
          name: host-var-log-ujo
          subPath: openvswitch
        - mountPath: /var/log/nsx-ujo
          name: host-var-log-ujo
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: nsx-node-agent-svc-account
      serviceAccountName: nsx-node-agent-svc-account
      terminationGracePeriodSeconds: 60
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
      - effect: NoSchedule
        key: node.kubernetes.io/unreachable
      volumes:
      - name: projected-volume
        projected:
          defaultMode: 420
          sources:
          - configMap:
              items:
              - key: ncp.ini
                path: ncp.ini
              name: nsx-node-agent-config
          - configMap:
              items:
              - key: version
                path: VERSION
              name: nsx-ncp-version-config
      - hostPath:
          path: /var/run/openvswitch
          type: ""
        name: openvswitch
      - hostPath:
          path: /var/run/nsx-ujo
          type: ""
        name: var-run-ujo
      - hostPath:
          path: /var/run/netns
          type: ""
        name: netns
      - hostPath:
          path: /proc
          type: ""
        name: proc
      - hostPath:
          path: /var/lib/kubelet/device-plugins/
          type: ""
        name: device-plugins
      - hostPath:
          path: /var/log/nsx-ujo
          type: DirectoryOrCreate
        name: host-var-log-ujo
      - hostPath:
          path: /sys
          type: ""
        name: host-sys
      - hostPath:
          path: /lib/modules
          type: ""
        name: host-modules
      - hostPath:
          path: /etc/openvswitch
          type: ""
        name: host-original-ovs-db
      - hostPath:
          path: /etc/os-release
          type: ""
        name: host-os-release
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberMisscheduled: 0
  numberReady: 0
  numberUnavailable: 3
  observedGeneration: 101
  updatedNumberScheduled: 3
NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
nsx-node-agent   3         3         0       3            0           <none>          64d
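For what it's worth, a stuck DaemonSet rollout can be inspected and reverted with the standard kubectl rollout subcommands (namespace and resource name as in the manifest above):

# watch the rollout until it completes or gets stuck
kubectl -n nsx-system rollout status daemonset/nsx-node-agent

# revert spec.template to the previous revision, e.g. the last working image
kubectl -n nsx-system rollout undo daemonset/nsx-node-agent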
It seems that the Kubernetes rolling update strategy is not honoring the defined spec? It should not allow this to happen during a rolling update.
I don't see a readiness probe in your manifest. Without a readiness probe, Kubernetes considers a pod "ready" as soon as its processes are running, and it will keep terminating the remaining pods during the rolling update.
A failing readiness probe on a pod, with maxUnavailable set to 1, should stop the update; but without such a probe, the cluster is never notified that the pods are not actually ready to receive traffic.
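As a minimal sketch, a readiness probe on the nsx-node-agent container could look like the following. The probe command is illustrative: it reuses the check_pod_liveness script from your liveness probe, which may or may not be a meaningful readiness check for the agent, and the timing values are assumptions to tune:

containers:
- name: nsx-node-agent
  ...
  readinessProbe:
    exec:
      command:
      - /bin/sh
      - -c
      - check_pod_liveness nsx-node-agent 5   # illustrative; reuses the liveness script
    initialDelaySeconds: 10                   # assumed values; tune to the agent's startup time
    periodSeconds: 10
    failureThreshold: 3

With a probe like this, a pod whose containers crash never reports Ready, the DaemonSet controller counts it as unavailable, and with maxUnavailable: 1 the rollout should pause at the first broken pod instead of replacing all of them.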