Kubernetes 1.8.0 Test Environment Installation and Deployment
Date: 2017-12-19
A Deployment provides a declarative way to define Pods and ReplicaSets, replacing the older ReplicationController and making applications easier to manage.
You only need to describe the desired state in a Deployment, and the Deployment controller will change the actual state of the Pods and ReplicaSets to that desired state. You can define a brand-new Deployment to create a ReplicaSet, or delete an existing Deployment and create a new one to replace it.
Typical use cases include rolling out a ReplicaSet, updating the Pod template, rolling back to an earlier revision, scaling, and pausing and resuming a rollout, all of which are covered below.
In other words, you can think of a Deployment as a state-management layer on top of ReplicaSets, with more features than the old ReplicationController.
Creating a Deployment:
apiVersion: apps/v1beta2 # for versions 1.9.0 and later use apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-demo
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
metadata.name: a Deployment named nginx-deployment-demo will be created.
spec.replicas: the Deployment will create 3 Pod replicas.
spec.selector: declares how the Deployment finds the Pods it manages. In this example it simply selects the label app: nginx defined in the Pod template.
spec.template.spec.containers: the Pod template's containers, defining each container's name, the image it uses, the ports it exposes, and so on.
spec.template.metadata.labels: defines the Pod label app: nginx.
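If you want to double-check what any of these fields mean against the API itself, kubectl explain prints the built-in field documentation for a field path, for example:
$ kubectl explain deployment.spec.selector
$ kubectl explain deployment.spec.template.spec.containers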
Apply the YAML:
[root@node-131 deployment-demo]# kubectl create -f nginx-deployment-demo.yaml --record
deployment "nginx-deployment-demo" created
The --record flag records the command used to create or update the resource. This is useful later, for example to see which command was run for each Deployment revision. Check the Deployment:
[root@node-131 deployment-demo]# kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
...
nginx-deployment-demo 3 3 3 3 7m
(AVAILABLE counts the replicas that are ready and available to users, i.e. ready for at least the time declared in .spec.minReadySeconds.) Check the ReplicaSets:
[root@node-131 deployment-demo]# kubectl get rs
NAME DESIRED CURRENT READY AGE
...
nginx-deployment-demo-6d8f46cfb7 3 3 3 10m
ReplicaSets created by a Deployment are named following the pattern [DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE].
[root@node-131 deployment-demo]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
...
nginx-deployment-demo-6d8f46cfb7-4m7lq 1/1 Running 0 11m app=nginx,pod-template-hash=2849027963
nginx-deployment-demo-6d8f46cfb7-rbz5x 1/1 Running 0 11m app=nginx,pod-template-hash=2849027963
nginx-deployment-demo-6d8f46cfb7-xwtfh 1/1 Running 0 11m app=nginx,pod-template-hash=2849027963
...
--show-labels: shows the labels automatically added to each Pod.
Note the Pod template label (app: nginx in this example); do not let it clash with the Pod template labels selected by other controllers (other Deployments, ReplicaSets, ReplicationControllers, and so on). Kubernetes itself will not stop you from specifying arbitrary Pod template labels, but if you do, these controllers will fight each other and may behave incorrectly.
pod-template-hash: this label is not specified by you and must not be modified. It is added to Pods automatically when the Deployment creates or adopts a ReplicaSet, so that the ReplicaSets managed by the Deployment do not collide.
Updating a Deployment:
Note: a Deployment rollout is triggered if and only if the Deployment's Pod template (i.e. .spec.template) changes, for example when you update the template's labels or container images. Other updates, such as scaling the Deployment, do not trigger a rollout.
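As a quick illustration of that rule (the track=stable label here is only a made-up example):
# touches .spec.template, so it triggers a rollout
$ kubectl patch deployment/nginx-deployment-demo -p '{"spec":{"template":{"metadata":{"labels":{"track":"stable"}}}}}'
# only changes .spec.replicas, so no rollout is triggered
$ kubectl scale deployment/nginx-deployment-demo --replicas=4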
Suppose we now want the nginx Pods to use the nginx:1.9.1 image instead of the original nginx:1.7.9 image.
1. Update the image with the kubectl set image command:
$ kubectl set image deployment/nginx-deployment-demo nginx=nginx:1.9.1
deployment "nginx-deployment" image updated
2. Edit the Deployment in place with kubectl edit, changing .spec.template.spec.containers[0].image from nginx:1.7.9 to nginx:1.9.1:
kubectl edit deployment/nginx-deployment-demo
deployment "nginx-deployment-demo" edited
Check the rollout status:
[root@node-132 ~]# kubectl rollout status deployment/nginx-deployment-demo
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
After the rollout succeeds, check the Deployment and Pods:
[root@node-132 ~]# kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
...
nginx-deployment-demo 3 3 3 3 59m
[root@node-132 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
...
nginx-deployment-demo-58b94fcb9-869qp 1/1 Running 0 1m
nginx-deployment-demo-58b94fcb9-f7qpz 1/1 Running 0 1m
nginx-deployment-demo-58b94fcb9-ztwpv 1/1 Running 0 1m
...
UP-TO-DATE: the number of replicas that have been updated to the desired Pod template.
CURRENT: the number of replicas the Deployment currently manages.
AVAILABLE: the number of replicas currently available to users.
Note that the pod-template-hash part of the Pod names (and label) has changed. Check the ReplicaSets:
[root@node-132 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
...
nginx-deployment-demo-58b94fcb9 3 3 3 15m
nginx-deployment-demo-6d8f46cfb7 0 0 0 1h
The Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, while scaling the old ReplicaSet down to 0 replicas. During an upgrade, a Deployment ensures that only a limited number of Pods are down at any time; it also ensures that only a limited number of Pods are created above the desired count. The defaults are 25% max unavailable and 25% max surge, which is the RollingUpdateStrategy visible in the describe output below (a snippet for setting these thresholds explicitly follows that output). Describe the Deployment to see how it rolled over:
[root@node-132 ~]# kubectl describe deployments nginx-deployment-demo
Name: nginx-deployment-demo
Namespace: default
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 27m deployment-controller Scaled up replica set nginx-deployment-demo-58b94fcb9 to 1
Normal ScalingReplicaSet 26m deployment-controller Scaled down replica set nginx-deployment-demo-6d8f46cfb7 to 2
Normal ScalingReplicaSet 26m deployment-controller Scaled up replica set nginx-deployment-demo-58b94fcb9 to 2
Normal ScalingReplicaSet 26m deployment-controller Scaled down replica set nginx-deployment-demo-6d8f46cfb7 to 1
Normal ScalingReplicaSet 26m deployment-controller Scaled up replica set nginx-deployment-demo-58b94fcb9 to 3
Normal ScalingReplicaSet 26m deployment-controller Scaled down replica set nginx-deployment-demo-6d8f46cfb7 to 0
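The 25% figures above come from the rolling update strategy defaults. If you would rather pin them explicitly, they live under .spec.strategy in the Deployment manifest; a minimal sketch (the values are only an illustration):
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1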
Rollover (also known as multiple updates in flight)
Each time the Deployment controller observes that a new Deployment has been created, it creates a new ReplicaSet to bring up the desired number of Pods if no existing ReplicaSet is already doing so.
Existing ReplicaSets whose Pods match .spec.selector by label but whose template does not match .spec.template are scaled down. Eventually, the new ReplicaSet is scaled up to .spec.replicas Pods and the old ReplicaSets are scaled down to 0.
If you update a Deployment while an existing rollout is still in progress, each update creates a new ReplicaSet and starts scaling it up, and the ReplicaSet that the previous update was scaling up is rolled over: it is added to the list of old ReplicaSets and starts being scaled down.
For example, suppose you create a Deployment with 5 replicas of nginx:1.7.9, but then update it to 5 replicas of nginx:1.9.1 when only 3 replicas of nginx:1.7.9 have been created. In that case, the Deployment immediately starts killing the 3 nginx:1.7.9 Pods it has already created and starts creating nginx:1.9.1 Pods. It does not wait for all 5 nginx:1.7.9 Pods to be created before changing course.
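If you want to watch a rollover as it happens, a simple way is to watch the ReplicaSets from a second terminal while you push updates (press Ctrl-C to stop watching):
$ kubectl get rs -w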
Label selector updates
Updating the label selector is generally discouraged; plan your selectors up front instead. In any case, if you do need to update a label selector, proceed with great caution and make sure you have anticipated all of the consequences.
Selector additions require the Pod template labels in the Deployment spec to be updated with the new label as well, otherwise a validation error is returned. This change is non-overlapping, meaning that the new selector does not select ReplicaSets and Pods created with the old selector, so all old ReplicaSets are orphaned and a new ReplicaSet is created.
Selector updates, i.e. changing the existing value of a selector key, result in the same behavior as additions.
Selector removals, i.e. removing an existing key from the Deployment selector, do not require any change to the Pod template labels. No existing ReplicaSet is orphaned, but note that the removed label still exists on the existing Pods and ReplicaSets.
Rolling back a Deployment:
Sometimes you may want to roll back a Deployment, for example when it is unstable (crash looping). By default, the Deployment's rollout history is kept in the system so that you can roll back at any time (you can change how many revisions are kept by modifying the revision history limit).
Note: a revision is created whenever the Deployment's rollout is triggered. That means a new revision is created when, and only when, the Deployment's Pod template (.spec.template) changes, for example when you update the template's labels or container images. Other updates, such as scaling the Deployment, do not create a revision, so you can scale manually or automatically at any time. When you roll back to an earlier revision, only the Pod template part of the Deployment is rolled back.
Suppose we made a typo while updating the Deployment and wrote the image name as nginx:1.91 instead of the correct nginx:1.9.1:
$ kubectl set image deployment/nginx-deployment-demo nginx=nginx:1.91
deployment "nginx-deployment-demo" image updated
The rollout gets stuck:
$ kubectl rollout status deployments nginx-deployment-demo
Waiting for rollout to finish: 1 out of 3 new replicas have been updated...
Press Ctrl-C to stop watching the rollout status.
Check the ReplicaSets:
[root@node-132 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
...
nginx-deployment-demo-58b94fcb9 3 3 3 1h
nginx-deployment-demo-6d8f46cfb7 0 0 0 2h
nginx-deployment-demo-866c748c7c 1 1 0 4m
You will see that the old ReplicaSet (nginx-deployment-demo-58b94fcb9) still has 3 replicas, while the new ReplicaSet (nginx-deployment-demo-866c748c7c) has 1 replica that never becomes ready.
Check the Pods:
[root@node-132 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
...
nginx-deployment-demo-866c748c7c-hvg7m 0/1 ErrImagePull 0 6m
...
Note: the Deployment controller automatically stops the bad rollout and stops scaling up the new ReplicaSet. How far it gets depends on the rollingUpdate parameters (maxUnavailable and maxSurge); this cluster is using the defaults of 25% max unavailable and 25% max surge, as the describe output below shows.
[root@node-132 ~]# kubectl describe deployment nginx-deployment-demo
Name: nginx-deployment-demo
Namespace: default
CreationTimestamp: Tue, 19 Dec 2017 14:13:18 +0800
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision=3
Selector: app=nginx
Replicas: 3 desired | 1 updated | 4 total | 3 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.91
Port: 80/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing False ProgressDeadlineExceeded
OldReplicaSets: nginx-deployment-demo-6d8f46cfb7 (3/3 replicas created)
NewReplicaSet: nginx-deployment-demo-866c748c7c (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 55s deployment-controller Scaled up replica set nginx-deployment-demo-6d8f46cfb7 to 3
Normal ScalingReplicaSet 15s deployment-controller Scaled up replica set nginx-deployment-demo-866c748c7c to 1
To fix this, we need to roll back to a previous stable revision of the Deployment.
Check the Deployment's rollout history:
[root@node-131 deployment-demo]# kubectl rollout history deployment/nginx-deployment-demo
deployments "nginx-deployment-demo"
REVISION CHANGE-CAUSE
1 kubectl create --filename=. --record=true
2 kubectl set image deployment/nginx-deployment-demo nginx=nginx:1.91
If CHANGE-CAUSE shows <none>, it means the command was run without --record=true.
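CHANGE-CAUSE is read from the kubernetes.io/change-cause annotation (you can see it in the revision details below), so instead of --record you can also set it by hand; the message text here is just an example:
$ kubectl annotate deployment/nginx-deployment-demo kubernetes.io/change-cause="image updated to nginx:1.9.1"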
View the details of a single revision:
[root@node-131 deployment-demo]# kubectl rollout history deployment/nginx-deployment-demo --revision=2
deployments "nginx-deployment-demo" with revision #2
Pod Template:
Labels: app=nginx
pod-template-hash=4227304737
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment-demo nginx=nginx:1.91
Containers:
nginx:
Image: nginx:1.91
Port: 80/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Roll back to a previous revision:
[root@node-131 deployment-demo]# kubectl rollout undo deployment/nginx-deployment-demo --to-revision=1
deployment "nginx-deployment-demo" rolled back
For other rollout-related commands, see the kubectl rollout reference.
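If you leave out --to-revision, kubectl rollout undo simply rolls back to the previous revision:
$ kubectl rollout undo deployment/nginx-deployment-demo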
The Deployment has now been rolled back to the previous stable revision. As you can see, the Deployment controller generated a DeploymentRollback event for the rollback to revision 1.
[root@node-131 deployment-demo]# kubectl describe deployment nginx-deployment-demo
Name: nginx-deployment-demo
Namespace: default
CreationTimestamp: Tue, 19 Dec 2017 16:36:52 +0800
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision=3
kubernetes.io/change-cause=kubectl create --filename=. --record=true
Selector: app=nginx
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.7.9
Port: 80/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-demo-6d8f46cfb7 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 7m deployment-controller Scaled up replica set nginx-deployment-demo-6d8f46cfb7 to 3
Normal ScalingReplicaSet 6m deployment-controller Scaled up replica set nginx-deployment-demo-866c748c7c to 1
Normal DeploymentRollback 1m deployment-controller Rolled back deployment "nginx-deployment-demo" to revision 1
Normal ScalingReplicaSet 1m deployment-controller Scaled down replica set nginx-deployment-demo-866c748c7c to 0
You can set .spec.revisionHistoryLimit to specify the maximum number of revisions the Deployment keeps in its history. By default all revisions are kept; if you set it to 0, the Deployment cannot be rolled back at all.
Scaling a Deployment:
Scale the Deployment to 10 replicas:
$ kubectl scale deployment nginx-deployment-demo --replicas 10
deployment "nginx-deployment" scaled
$ kubectl get pods | grep nginx-deployment-demo | wc -l
10
Assuming horizontal pod autoscaling is enabled in your cluster, you can also set up an autoscaler for the Deployment that picks a number of Pods between a minimum and a maximum based on the CPU utilization of the existing Pods:
$ kubectl autoscale deployment nginx-deployment-demo --min=10 --max=15 --cpu-percent=80
deployment "nginx-deployment-demo" autoscaled
Proportional scaling:
A RollingUpdate Deployment can run multiple versions of an application at the same time. When you, or an autoscaler, scale a RollingUpdate Deployment while a rollout is in flight (in progress or paused), the Deployment controller balances the additional replicas across the existing active ReplicaSets (ReplicaSets that have Pods) to reduce risk. This is called proportional scaling.
For example, suppose a Deployment is running 10 replicas, with maxSurge=3 and maxUnavailable=2 (25% of 10, rounded up and down respectively):
[root@node-131 deployment-demo]# kubectl get deploy nginx-deployment-demo
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment-demo 10 10 10 10 23m
Update to an image tag that does not exist:
$ kubectl set image deploy/nginx-deployment-demo nginx=nginx:sometag
deployment "nginx-deployment-demo" image updated
The image update triggers a new rollout (ReplicaSet nginx-deployment-demo-6d4fd8f576), but it is blocked at 5 replicas because of the maxSurge and maxUnavailable requirements mentioned above: at most 13 Pods in total (10 plus a surge of 3) and at least 8 of them available (10 minus 2 unavailable).
[root@node-131 deployment-demo]# kubectl get rs | grep nginx-deployment-demo
nginx-deployment-demo-6d4fd8f576 5 5 0 49s
nginx-deployment-demo-6d8f46cfb7 8 8 8 25m
nginx-deployment-demo-866c748c7c 0 0 0 24m
Then a new scaling request for the Deployment comes in: the autoscaler increases the Deployment's replica count to 15. The Deployment controller has to decide where to add these 5 new replicas. Without proportional scaling, all 5 would go into the new ReplicaSet. With proportional scaling, the additional replicas are spread across all ReplicaSets: the larger share goes to the ReplicaSets with the most replicas, the smaller share to those with fewer, and ReplicaSets with 0 replicas are not scaled up.
Now manually scale the Deployment to 15:
$ kubectl scale deployment nginx-deployment-demo --replicas 15
deployment "nginx-deployment-demo" scaled
Check:
$ kubectl get deploy nginx-deployment-demo
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment-demo 15 19 7 10 31m
$ kubectl get rs | grep nginx-deployment-demo
nginx-deployment-demo-6d4fd8f576 7 7 0 7m
nginx-deployment-demo-6d8f46cfb7 12 12 12 32m
nginx-deployment-demo-866c748c7c 0 0 0 31m
Pausing and resuming a Deployment:
You can pause a Deployment before making one or more updates and then resume it. This lets you apply several fixes while it is paused without triggering unnecessary rollouts.
Using the Deployment we just created:
$ kubectl get deploy nginx-deployment-demo
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment-demo 15 15 15 15 48m
$ kubectl get rs | grep nginx-deployment-demo
...
nginx-deployment-demo-6d8f46cfb7 15 15 15 49m
Pause the Deployment:
$ kubectl rollout pause deployment/nginx-deployment-demo
deployment "nginx-deployment-demo" paused
Update the Deployment's image:
$ kubectl set image deploy/nginx-deployment-demo nginx=nginx:1.9.1
deployment "nginx-deployment-demo" image updated
Check the rollout history; notice that the image change made while paused has not produced a new revision:
[root@node-131 deployment-demo]# kubectl rollout history deploy/nginx-deployment-demo
deployments "nginx-deployment-demo"
REVISION CHANGE-CAUSE
4 kubectl scale deployment nginx-deployment-demo --replicas=15
5 kubectl set image deployment/nginx-deployment-demo nginx=nginx:1.91
6 kubectl scale deployment nginx-deployment-demo --replicas=10
You can make as many updates as you like while the Deployment is paused, for example updating the resources it uses:
$ kubectl set resources deployment nginx-deployment-demo -c=nginx --limits=cpu=200m,memory=512Mi
deployment "nginx-deployment-demo" resource requirements updated
As long as the Deployment is paused, it keeps serving its pre-pause state and none of these updates have any effect.
Resume the Deployment (the command is shown below) and watch a new ReplicaSet come up with all of the updates:
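The resume command itself was not captured above; it is the counterpart of pause:
$ kubectl rollout resume deployment/nginx-deployment-demo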
[root@node-131 deployment-demo]# kubectl get rs
NAME DESIRED CURRENT READY AGE
...
nginx-deployment-demo-58d97d6f5c 15 15 15 31s
nginx-deployment-demo-6d4fd8f576 0 0 0 41m
nginx-deployment-demo-6d8f46cfb7 0 0 0 1h
nginx-deployment-demo-866c748c7c 0 0 0 1h
Deployment status
A Deployment goes through several states during its lifecycle: it can be progressing while rolling out a new ReplicaSet, it can be complete, or it can fail to progress.
Progressing state
Kubernetes marks a Deployment as progressing while it is performing one of the following tasks:
- the Deployment is creating a new ReplicaSet;
- the Deployment is scaling up its newest ReplicaSet;
- the Deployment is scaling down its older ReplicaSet(s);
- new Pods become ready or available.
You can monitor the Deployment's progress with the kubectl rollout status command.
Complete state
Kubernetes marks a Deployment as complete when it has the following characteristics:
- all of its replicas have been updated to the latest version you specified;
- all of its replicas are available;
- no old replicas are running.
You can check whether a Deployment has completed with kubectl rollout status. If the rollout completed successfully, kubectl rollout status returns an exit code of 0.
Fail to progress state
A Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing. Possible causes include:
- insufficient quota;
- readiness probe failures;
- image pull errors;
- insufficient permissions;
- limit ranges;
- application runtime misconfiguration.
One way to detect this condition is to specify spec.progressDeadlineSeconds in your Deployment spec. It denotes the number of seconds the Deployment controller waits before indicating (via the Deployment status) that the Deployment's progress has stalled.
The following kubectl command sets progressDeadlineSeconds so that the controller reports lack of progress after the Deployment has been stuck for 10 minutes:
$ kubectl patch deployment/nginx-deployment-demo -p '{"spec":{"progressDeadlineSeconds":600}}'
"nginx-deployment-demo" patched
Once the deadline has been exceeded, the Deployment controller adds a DeploymentCondition with the following attributes to the Deployment's status.conditions:
Type=Progressing, Status=False, Reason=ProgressDeadlineExceeded
See the Kubernetes API conventions for more information on status conditions.
Note: apart from reporting Reason=ProgressDeadlineExceeded, Kubernetes takes no action on a stalled Deployment. Higher-level orchestrators can take advantage of this and act accordingly, for example by rolling the Deployment back to its previous version.
Note: if you pause a Deployment, Kubernetes does not check progress against the deadline during the pause. You can safely pause a Deployment in the middle of a rollout and resume it without triggering the deadline condition.
You may hit transient errors with your Deployments, either because you set a low timeout or because of other transient failures. For example, suppose you have insufficient quota. When you describe the Deployment you may notice the following:
$ kubectl describe deployment nginx-deployment
<...>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True ReplicaSetUpdated
ReplicaFailure True FailedCreate
<...>
Running kubectl get deployment nginx-deployment -o yaml, the Deployment status may look something like this:
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: Replica set "nginx-deployment-4262182780" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  - lastTransitionTime: 2016-10-04T12:25:42Z
    lastUpdateTime: 2016-10-04T12:25:42Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2016-10-04T12:25:39Z
    lastUpdateTime: 2016-10-04T12:25:39Z
    message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
      object-counts, requested: pods=1, used: pods=3, limited: pods=2'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 3
  replicas: 2
  unavailableReplicas: 2
Eventually, once the Deployment's progress deadline is exceeded, Kubernetes updates the status and the reason of the Progressing condition:
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing False ProgressDeadlineExceeded
ReplicaFailure True FailedCreate
You can address the insufficient quota by scaling the Deployment down or by increasing the namespace quota. Once the quota requirements are satisfied, the Deployment controller completes the rollout and the Deployment's condition updates to a successful state (Status=True and Reason=NewReplicaSetAvailable).
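For reference, a minimal sketch of raising the Pod quota, assuming the ResourceQuota is the object-counts quota named in the error message above (the limit of 4 is only illustrative):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    pods: "4"
With enough quota available, the Deployment conditions eventually look like this: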
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
Type=Available with Status=True means your Deployment has minimum availability; minimum availability is dictated by the parameters specified in the deployment strategy. Type=Progressing with Status=True means your Deployment is either in the middle of a rollout or has successfully completed one and the minimum required new replicas are available (see the Reason of the condition for the specifics; in our case Reason=NewReplicaSetAvailable means the Deployment is complete). You can check whether a Deployment has failed to progress with kubectl rollout status: if the Deployment has exceeded its progress deadline, kubectl rollout status returns a non-zero exit code.
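That exit code makes it easy to gate a script on the rollout result; a small sketch:
$ kubectl rollout status deployment/nginx-deployment-demo && echo "rollout succeeded" || echo "rollout failed or timed out"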
Operating on a failed Deployment:
All actions that apply to a complete Deployment also apply to a failed Deployment. You can scale it up or down, roll back to a previous revision, and even pause it while you apply multiple fixes to its Pod template.
Clean-up policy:
You can set .spec.revisionHistoryLimit on a Deployment to specify how many old ReplicaSets to retain; the rest are garbage-collected in the background. By default all revision history is kept (in a future version the default will change to 2).
Note: setting this value to 0 clears all of the Deployment's rollout history, so the Deployment can no longer be rolled back.
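In the manifest this is a single field under spec; a minimal sketch (10 is just an example value):
spec:
  revisionHistoryLimit: 10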
That wraps up the Deployments section.
Other posts in this series:
References:
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/