Official Kubernetes Volume documentation: Kubernetes Volume
Official nfs-subdir-external-provisioner repository and docs: nfs-subdir-external-provisioner
The legacy version is here: external-storage (now a public archive)
I previously studied creating, deleting, and using NFS volumes; that counts as the static NFS approach, and it felt fairly simple.
This one is said to be the dynamic approach, and for now I haven't figured out where the "dynamic" part actually shows up.
In any case, let's follow along and complete it first; after using it for a while, the difference should become clear.
Clone the code
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/
A quick look around
cd /root/working/nfs-subdir-external-provisioner/deploy
ls
class.yaml clusterrolebinding.yaml clusterrole.yaml deployment.yaml README.md rolebinding.yaml role.yaml serviceaccount.yaml
deployment.yaml: deploys the provisioner
class.yaml: defines the storage class
clusterrole.yaml: defines the cluster role and rules (RBAC)
test-claim.yaml, test-pod.yaml: test cases
objects/ directory: the files inside carry the same actual content as the parent directory, just split up more finely
The README puts it this way:
The objects in this directory are the same as in the parent except split up into one file per object for certain users' convenience.
Configure the provisioner (Configure the NFS subdir external provisioner)
For simplicity, everything is done in the default namespace.
Use a pre-built Aliyun image
This uses an Aliyun Container Registry repository linked to GitHub, with the Docker image built on an overseas machine:
registry.cn-beijing.aliyuncs.com/docker-dhbm/nfs-subdir-external-provisioner
Modify the IP (192.168.1.188) and the NFS directory (/nfs/data):
vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/docker-dhbm/nfs-subdir-external-provisioner # registry.cn-hangzhou.aliyuncs.com/xzjs/nfs-subdir-external-provisioner:v4.0.0 # k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.1.188 # 10.3.243.101
            - name: NFS_PATH
              value: /nfs/data # /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.188 # 10.3.243.101
            path: /nfs/data # /ifs/kubernetes
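The deployment above assumes the NFS server at 192.168.1.188 really exports /nfs/data. A minimal /etc/exports sketch on the NFS server (the 192.168.1.0/24 subnet is my assumption; adjust to your network):
# /etc/exports on 192.168.1.188 (subnet is an assumption)
/nfs/data 192.168.1.0/24(rw,sync,no_root_squash)
After editing, reload with exportfs -r and verify with showmount -e 192.168.1.188.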
Start the resources in order. (The ls output above shows no rbac.yaml; in recent checkouts the combined RBAC manifest is deploy/rbac.yaml while the split per-object files live under objects/, so apply whichever form your checkout provides.)
kubectl create -f rbac.yaml
kubectl create -f class.yaml
kubectl create -f deployment.yaml
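Before moving on to the tests, a quick sanity check (not part of the original steps) to confirm the StorageClass exists and the provisioner came up:
kubectl get storageclass
kubectl get deployment nfs-client-provisioner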
Start the test cases as well
kubectl create -f test-claim.yaml -f test-pod.yaml
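For reference, test-claim.yaml is never shown in full below; based on the spec fragment quoted near the end, it looks roughly like this (a sketch, so metadata and field order may differ from your checkout):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi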
Confirm the resources one by one
kubectl get pods
The list already includes the test case test-pod, as well as demo1 from the earlier static NFS exercise
NAME READY STATUS RESTARTS AGE
demo-654c477f6d-l6lbh 1/1 Running 5 (145m ago) 12d
demo1-deployment-67fc75ff95-2k259 1/1 Running 1 (145m ago) 6h19m
demo1-deployment-67fc75ff95-nc5rx 1/1 Running 1 (145m ago) 6h19m
nfs-client-provisioner-554dbf7dd5-qd7hc 1/1 Running 9 (132m ago) 25h
test-pod 0/1 Completed 0 106m
kubectl get pv
The list already includes nfs-pv from the earlier static NFS exercise
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv 100Mi RWX Retain Bound default/nfs-pvc nfs-pv 7h3m
pvc-cc92f9b2-327c-4afb-96b6-7e9a90bcb9d3 1Mi RWX Delete Bound default/test-claim managed-nfs-storage 107m
kubectl get pvc
The list already includes nfs-pvc from the earlier static NFS exercise; test-claim is the pvc created by the test case
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc Bound nfs-pv 100Mi RWX nfs-pv 7h3m
test-claim Bound pvc-cc92f9b2-327c-4afb-96b6-7e9a90bcb9d3 1Mi RWX managed-nfs-storage 107m
Check how test-pod mounts the volume
cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:stable
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS20211221 && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
As you can see, once test-pod starts it runs the shell command touch /mnt/SUCCESS20211221 inside the container and then exits immediately, which is why test-pod's status is Completed.
Check test-pod's details
kubectl describe pod test-pod
Name:         test-pod
Namespace:    default
Priority:     0
Node:         centos7-185/192.168.1.185
Start Time:   Tue, 21 Dec 2021 01:38:59 -0500
...
Containers:
  test-pod:
    Container ID:  docker://e2bf4fdae908deaf0130b53e29ea6b4ce8b64ff16b6edb8775e65648ff0ba296
    Image:         busybox:stable
    ...
    Command:
      /bin/sh
    Args:
      -c
      touch /mnt/SUCCESS20211221 && exit 0 || exit 1
    State:          Terminated
    ...
    Environment:   <none>
    Mounts:
      /mnt from nfs-pvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4c5nw (ro)
...
Volumes:
  nfs-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-claim
    ReadOnly:   false
...
Confirm the data
Since the container created a file, the corresponding local directory backing the mounted pvc should show the actual content (even though the pod has already exited)
cd /nfs/data
ls
default-test-claim-pvc-cc92f9b2-327c-4afb-96b6-7e9a90bcb9d3 nginx
cd default-test-claim-pvc-cc92f9b2-327c-4afb-96b6-7e9a90bcb9d3/
ls
SUCCESS20211221
As you can see, a file was indeed created in this directory: SUCCESS20211221
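A side note on that directory name: according to the project README, the provisioner names each dynamically provisioned subdirectory after the claim it serves, using the pattern
${namespace}-${pvcName}-${pvName}
which is exactly how default-test-claim-pvc-cc92f9b2-... above is composed (namespace default, pvc test-claim, pv pvc-cc92f9b2-...).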
Delete the pod
kubectl delete -f test-pod.yaml
Check
[root@centos7-188 data]# ll
total 0
drwxrwxrwx 2 root root 29 Dec 21 01:39 default-test-claim-pvc-cc92f9b2-327c-4afb-96b6-7e9a90bcb9d3
drwxr-xr-x 2 root root 24 Dec 20 21:10 nginx
The data is still there
Delete the pvc
kubectl delete -f test-claim.yaml
Now the data is gone!
The storage class's usage rule is: data is not deleted when the pod is deleted.
Looking at class.yaml, you can see archiveOnDelete: "false", which is why the backing directory is removed when the pvc itself is deleted.
We won't get bogged down in the exact pvc deletion rules for now!
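If you would rather keep the data when a claim is deleted, the StorageClass parameter can be flipped. A sketch based on the class.yaml shown further down (the managed-nfs-storage-archive name is hypothetical); with "true", the provisioner renames the directory with an archived- prefix instead of deleting it:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-archive # hypothetical name for this variant
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true" # keep deleted claims' data as archived-<dir>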
This article covers the topic in good detail:
【kubernetes】持久化存储之静态PV/PVC
Comparing how the pvc itself is written in the dynamic and static approaches, there is practically no difference; the only thing that changes is which storageClassName is referenced.
Dynamic pvc
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
Static pvc
spec:
  storageClassName: nfs-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi # capacity
Comparing static and dynamic pv creation
Dynamic pv
# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
1). No PersistentVolume is created directly; instead, the StorageClass names a provisioner
2). Setting up that provisioner is much more involved than a static pv!
3). nfs-subdir-external-provisioner works by running a pod that mounts the NFS share inside itself
4). Later, whenever a pvc is created, a pv is created for it dynamically. Why do it this way?
5). Simply put: there is no telling in advance how much space pvcs will request, so matching pvs cannot be created up front
6). Although the example pvc above only asks for 1Mi, the same NFS export keeps serving later pvcs, each of which gets its own dynamically created pv (a bound pv is never shared between pvcs; see the sketch after this list)
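To see point 6 in action, a second claim against the same StorageClass (the name test-claim-2 and the 5Mi size are made up for illustration) would get its own pv and its own subdirectory under /nfs/data:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-2 # hypothetical second claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Mi
After applying it, kubectl get pv should list a second pvc-... volume bound to test-claim-2.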
Static pv
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  namespace: default # note: PersistentVolume is cluster-scoped, so this namespace field is ignored
  labels:
    pv: nfs-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-pv
  nfs:
    server: 192.168.0.141
    path: "/nfs/data/nginx" # this NFS directory must already exist on the server
Here the capacity (100Mi) and access modes are specified directly, and a pvc's request must not exceed the pv's specs! (see the sketch below)
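To illustrate that constraint, a claim against the static nfs-pv class asking for more than its 100Mi (the name too-big-claim and the 200Mi figure are made up) would find no matching pv and simply stay Pending:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: too-big-claim # hypothetical; asks for more than nfs-pv offers
spec:
  storageClassName: nfs-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi # exceeds the 100Mi pv, so binding never succeeds
kubectl get pvc would show it stuck in Pending; this inflexibility is exactly what dynamic provisioning removes.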