Question:

Google Kubernetes Engine: mounted persistent volume not visible in the instance

龚德本
2023-03-14

First, I create a PersistentVolume:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-volume
    spec:
      capacity:
        storage: 200Gi
      accessModes:
        - ReadWriteOnce
      gcePersistentDisk:
        pdName: my-disk
        fsType: ext4

Then I create a PersistentVolumeClaim:


    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 200Gi
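
Incidentally, with a default StorageClass present, a claim written like this gets a new disk dynamically provisioned for it rather than binding to the hand-created PV (consistent with my-volume staying Available in the pv listing further down). A variant that pins the claim to the pre-created PV could look like this (a sketch; volumeName and storageClassName are standard PVC fields):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  storageClassName: ""   # empty string opts out of the default StorageClass
  volumeName: my-volume  # bind explicitly to the manually created PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
```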

Then I create a StatefulSet and mount the volume at /mnt/disks, which is an existing directory. statefulset.yaml:


    apiVersion: apps/v1beta2
    kind: StatefulSet
    metadata:
      name: ...
    spec:
        ...
        spec:
          containers:
          - name: ...
            ...
            volumeMounts:
            - name: my-volume
              mountPath: /mnt/disks
          volumes:
          - name: my-volume
            emptyDir: {}
      volumeClaimTemplates:
      - metadata:
          name: my-claim
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 200Gi

I run the command kubectl get pv and see that the disks are successfully mounted to each instance:


    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                    STORAGECLASS   REASON    AGE
    my-volume                                  200Gi      RWO            Retain           Available                                                     19m
    pvc-17c60f45-2e4f-11e8-9b77-42010af0000e   200Gi      RWO            Delete           Bound       default/my-claim-xxx_1   standard                 13m
    pvc-5972c804-2e4e-11e8-9b77-42010af0000e   200Gi      RWO            Delete           Bound       default/my-claim         standard                 18m
    pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e   200Gi      RWO            Delete           Bound       default/my-claimxxx_0    standard                 18m

Running df -h on the instance gives:


    Filesystem     Type      Size  Used Avail Use% Mounted on
    /dev/root      ext2      1.2G  447M  774M  37% /
    devtmpfs       devtmpfs  1.9G     0  1.9G   0% /dev
    tmpfs          tmpfs     1.9G     0  1.9G   0% /dev/shm
    tmpfs          tmpfs     1.9G  744K  1.9G   1% /run
    tmpfs          tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
    tmpfs          tmpfs     1.9G     0  1.9G   0% /tmp
    tmpfs          tmpfs     256K     0  256K   0% /mnt/disks
    /dev/sda8      ext4       12M   28K   12M   1% /usr/share/oem
    /dev/sda1      ext4       95G  3.5G   91G   4% /mnt/stateful_partition
    tmpfs          tmpfs     1.0M  128K  896K  13% /var/lib/cloud
    overlayfs      overlay   1.0M  148K  876K  15% /etc

Then I build the image, push it, and create the StatefulSet:


    docker build -t gcr.io/xxx .
    gcloud docker -- push gcr.io/xxx
    kubectl create -f statefulset.yaml

I SSH into the instance that is running the Docker image. I cannot see the volume in either the instance or the Docker container.

I did find the volume: running df -ath on the instance, I saw the related entries:


    /dev/sdb       -               -     -     -    - /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gke-xxx-cluster-c-pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e
    /dev/sdb       -               -     -     -    - /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/61bb679b-2e4e-11e8-9b77-42010af0000e/volumes/kubernetes.io~gce-pd/pvc-61b9daf9-2e4e-11e8-9b77-42010af0000e

Then I exec into the Docker container, run df -ath, and get:


    Filesystem     Type     Size  Used Avail Use% Mounted on
    /dev/sda1      ext4      95G  3.5G   91G   4% /mnt/disks

The output of kubectl describe pod is:


    Name:           xxx-replicaset-0
    Namespace:      default
    Node:           gke-xxx-cluster-default-pool-5e49501c-nrzt/10.128.0.17
    Start Time:     Fri, 23 Mar 2018 11:40:57 -0400
    Labels:         app=xxx-replicaset
                    controller-revision-hash=xxx-replicaset-755c4f7cff
    Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"default","name":"xxx-replicaset","uid":"d6c3511f-2eaf-11e8-b14e-42010af0000...
                    kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container xxx-deployment
    Status:         Running
    IP:             10.52.4.5
    Created By:     StatefulSet/xxx-replicaset
    Controlled By:  StatefulSet/xxx-replicaset
    Containers:
      xxx-deployment:
        Container ID:   docker://137b3966a14538233ed394a3d0d1501027966b972d8ad821951f53d9eb908615
        Image:          gcr.io/sampeproject/xxxstaging:v1
        Image ID:       docker-pullable://gcr.io/sampeproject/xxxstaging@sha256:a96835c2597cfae3670a609a69196c6cd3d9cc9f2f0edf5b67d0a4afdd772e0b
        Port:           8080/TCP
        State:          Running
          Started:      Fri, 23 Mar 2018 11:42:17 -0400
        Ready:          True
        Restart Count:  0
        Requests:
          cpu:        100m
        Environment:  
        Mounts:
          /mnt/disks from my-volume (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-hj65g (ro)
    Conditions:
      Type           Status
      Initialized    True
      Ready          True
      PodScheduled   True
    Volumes:
      my-claim:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  my-claim-xxx-replicaset-0
        ReadOnly:   false
      my-volume:
        Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
        Medium:
      default-token-hj65g:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-hj65g
        Optional:    false
    QoS Class:       Burstable
    Node-Selectors:  
    Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                     node.alpha.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason                 Age                From                                                      Message
      ----     ------                 ----               ----                                                      -------
      Warning  FailedScheduling       10m (x4 over 10m)  default-scheduler                                         PersistentVolumeClaim is not bound: "my-claim-xxx-replicaset-0" (repeated 5 times)
      Normal   Scheduled              9m                 default-scheduler                                         Successfully assigned xxx-replicaset-0 to gke-xxx-cluster-default-pool-5e49501c-nrzt
      Normal   SuccessfulMountVolume  9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  MountVolume.SetUp succeeded for volume "my-volume"
      Normal   SuccessfulMountVolume  9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  MountVolume.SetUp succeeded for volume "default-token-hj65g"
      Normal   SuccessfulMountVolume  9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  MountVolume.SetUp succeeded for volume "pvc-902c57c5-2eb0-11e8-b14e-42010af0000e"
      Normal   Pulling                9m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  pulling image "gcr.io/sampeproject/xxxstaging:v1"
      Normal   Pulled                 8m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  Successfully pulled image "gcr.io/sampeproject/xxxstaging:v1"
      Normal   Created                8m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  Created container
      Normal   Started                8m                 kubelet, gke-xxx-cluster-default-pool-5e49501c-nrzt  Started container

The output of lsblk on the instance is:

    NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda       8:0    0  100G  0 disk 
    ├─sda1    8:1    0 95.9G  0 part /mnt/disks
    ├─sda2    8:2    0   16M  0 part 
    ├─sda3    8:3    0    2G  0 part 
    ├─sda4    8:4    0   16M  0 part 
    ├─sda5    8:5    0    2G  0 part 
    ├─sda6    8:6    0  512B  0 part 
    ├─sda7    8:7    0  512B  0 part 
    ├─sda8    8:8    0   16M  0 part 
    ├─sda9    8:9    0  512B  0 part 
    ├─sda10   8:10   0  512B  0 part 
    ├─sda11   8:11   0    8M  0 part 
    └─sda12   8:12   0   32M  0 part 
    sdb       8:16   0  200G  0 disk 

Why is this happening?

1 Answer

赵夕
2023-03-14

When you use PVCs, Kubernetes manages the persistent disks for you.

How exactly PVs are provisioned can be defined via the provisioner in a StorageClass. Since you are using GKE, the default StorageClass uses the kubernetes.io/gce-pd provisioner (https://kubernetes.io/docs/concepts/storage/storage-classes/#gce).
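
For reference, a GKE default StorageClass of that era looked roughly like this (a sketch based on the linked docs; the type parameter is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd   # dynamically provisions GCE persistent disks
parameters:
  type: pd-standard                 # standard (non-SSD) PD; pd-ssd is also possible
```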

In other words, a new PV is created for each pod. Note that in the pod description above, the only thing mounted at /mnt/disks is my-volume, which is an emptyDir; the per-pod claim my-claim-xxx-replicaset-0 is attached to the node (hence the /dev/sdb entries) but never mounted into the container, which is why the disk is not visible there.
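
A minimal sketch of what the mount section could look like if the intent is to mount the per-pod claim at /mnt/disks (names taken from the question's pod output; elided fields such as serviceName and selector are left out):

```yaml
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: xxx-replicaset
spec:
  # ... serviceName, replicas, selector, template metadata elided ...
  template:
    spec:
      containers:
      - name: xxx-deployment
        volumeMounts:
        - name: my-claim        # must match the volumeClaimTemplates name
          mountPath: /mnt/disks
  volumeClaimTemplates:
  - metadata:
      name: my-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 200Gi
```

With this, each replica gets its own dynamically provisioned disk mounted at /mnt/disks, and the emptyDir volume is no longer needed.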

 Related questions:
  • It shows me the error: mount: wrong fs type, bad option, bad superblock on /dev/xvdf, missing codepage or helper program, or other error. The output looks like this

  • I am using WebLogic 10.3. I am trying to configure a durable subscription with persistent messages backed by a JDBC store (in an Oracle DB). I have a topic that an MDB listens to as a durable subscriber. In scenario 1: if I send a message, it hits the MDB. In scenario 2: I suspend the MDB, expecting a message sent to the topic to remain there as long as it is not consumed by the MDB (the only registered durable subscriber). But when I send a message to the topic, it appears there briefly and then disappears (I

  • Question: I have a managed bean that contains a list of entity objects for the current page. After I create a new object and persist it to the database with persist() in one transaction; in another transaction, when I call merge (since the entity is in a detached state because the previous transaction committed), the entity manager cannot find the object in the persistence context and issues a select query against the database. Am I missing something, or is this normal behavior? Update: the above problem exists when I use a MySQL database and an auto-generated ID column. When I

  • This article presents an example of opening a WebView in Android from the Unity3D game engine, including usage tips and caveats for those who need it. It describes how to call the Android WebView component from Unity to implement in-app-browser-style page switching. First, open Eclipse and create an Android project: UnityTestActivity.java

  • I have some files received from the Evernote API (via ) and written to Google Cloud Storage with the following code: It still works even for some types of documents. But GCS logs the following for certain files: and These errors do not seem to follow any pattern. It happens with any type of document (documents, sounds, pictures, etc.); some documents of a given type work while others do not. It is not related to size (some small documents and some large documents both work). Any ideas? Here is the full stack trace, although I

  • Question: I was testing transaction support on an InnoDB table, and just out of curiosity I tried running the same transaction on a MyISAM table, and to my surprise it worked. I assumed that queries on a MyISAM table are executed one by one rather than as a single atomic operation, and I got no errors from START TRANSACTION, COMMIT, or ROLLBACK. I am curious: does the MyISAM engine simply ignore these operations, or does it do something? Answer: MyISAM has