
Part 3: Ceph Advanced - 10. Finding the Clients Using an RBD Image

小牛编辑
2023-12-01

Sometimes, attempting to delete an RBD image fails with a message that the image is still in use:

  rbd rm foo
  2016-11-09 20:16:14.018332 7f81877a07c0 -1 librbd: image has watchers - not removing
  Removing image: 0% complete...failed.
  rbd: error: image still has watchers
  This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.

So we would like to find out exactly which client is using the RBD image.

An RBD image is currently consumed in two main ways: mapped by the kernel module and then mounted, or attached to a virtual machine through libvirt. In both cases, rados listwatchers reveals who is watching the image; the only difference is the name of the header object to query.
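Which object to query depends on the image format: a format 1 image stores its header in an object named "<image>.rbd", while a format 2 image stores it in "rbd_header.<id>", where the id is the suffix of block_name_prefix from rbd info. A minimal sketch of the naming rule (the helper name watch_object is ours, not a Ceph command):

```shell
# Compute the object name to pass to `rados listwatchers`.
# watch_object is a hypothetical helper, not part of Ceph.
watch_object() {
  local format="$1" name_or_id="$2"
  if [ "$format" = "1" ]; then
    # format 1: the header object is "<image>.rbd"
    echo "${name_or_id}.rbd"
  else
    # format 2: the header object is "rbd_header.<id>",
    # with <id> taken from block_name_prefix in `rbd info`
    echo "rbd_header.${name_or_id}"
  fi
}

watch_object 1 foo               # foo.rbd
watch_object 2 13c3277850f538    # rbd_header.13c3277850f538
```

Both cases below are instances of this rule: foo.rbd for the format 1 image, rbd_header.13c3277850f538 for the format 2 volume.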

Mapped via the kernel module

Check it as follows:

  # List the images in the rbd pool
  root@ceph1:~# rbd ls
  foo
  ltest
  test
  # Find the watchers of foo
  root@ceph1:~# rados -p rbd listwatchers foo.rbd
  watcher=10.202.0.82:0/1760414582 client.1332795 cookie=1
  # Check on the corresponding host
  root@ceph2:~# rbd showmapped | grep foo
  0 rbd foo - /dev/rbd0
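The host to log in to can be read straight off the watcher line. A small parsing sketch using the example output above (pure shell string manipulation, no cluster required):

```shell
# Example watcher line, copied from the `rados listwatchers` output above
line="watcher=10.202.0.82:0/1760414582 client.1332795 cookie=1"

# Strip the "watcher=" prefix, then everything from the first ":" onward,
# leaving just the client IP to log in to.
addr="${line#watcher=}"
ip="${addr%%:*}"
echo "$ip"    # 10.202.0.82
```

Once on that host, rbd showmapped confirms the mapping, and rbd unmap /dev/rbd0 releases it so the delete can succeed.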

Virtual machine using the RBD image via libvirt

Check it as follows:

  # Inspect the volume backing this virtual machine
  root@ceph1:~# rbd info volumes/volume-ee0c4077-a607-4bc9-a8cf-e893837361f3
  rbd image 'volume-ee0c4077-a607-4bc9-a8cf-e893837361f3':
  size 1024 MB in 256 objects
  order 22 (4096 kB objects)
  block_name_prefix: rbd_data.13c3277850f538
  format: 2
  features: layering, striping
  flags:
  parent: images/3601459f-060b-460f-b73b-db74237d922e@snap
  overlap: 40162 kB
  stripe unit: 4096 kB
  stripe count: 1
  # Find the watchers of this image
  root@ceph1:~# rados -p volumes listwatchers rbd_header.13c3277850f538
  watcher=10.202.0.21:0/1043073 client.1298745 cookie=140256077850496
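For a format 2 image, the rbd_header.<id> object name is derived from the block_name_prefix reported by rbd info: drop the "rbd_data." prefix and prepend "rbd_header.". A sketch using the sample output above:

```shell
# block_name_prefix line as printed by `rbd info` above
prefix_line="block_name_prefix: rbd_data.13c3277850f538"

# The image id is whatever follows "rbd_data."
image_id="${prefix_line##*rbd_data.}"

# The header object to pass to `rados listwatchers` for a format 2 image
header="rbd_header.${image_id}"
echo "$header"    # rbd_header.13c3277850f538
```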

Log in to the controller node and look up the corresponding Cinder volume:

  root@controller:~# cinder list | grep ee0c4077-a607-4bc9-a8cf-e893837361f3
  | ee0c4077-a607-4bc9-a8cf-e893837361f3 | in-use | | 1 | - | true | 96ee1aad-af27-4c9d-968f-291dbb2766a1 |
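The last UUID column of the cinder list row is the ID of the server the volume is attached to. A quick parse of the sample row (field positions assume the default table layout shown above):

```shell
# The cinder list row from above, as a single string
row='| ee0c4077-a607-4bc9-a8cf-e893837361f3 | in-use | | 1 | - | true | 96ee1aad-af27-4c9d-968f-291dbb2766a1 |'

# Split on "|"; the second-to-last field is the attached server ID.
server_id=$(echo "$row" | awk -F'|' '{ gsub(/ /,"",$(NF-1)); print $(NF-1) }')
echo "$server_id"    # 96ee1aad-af27-4c9d-968f-291dbb2766a1
```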

The volume is attached to the virtual machine with ID 96ee1aad-af27-4c9d-968f-291dbb2766a1. Use the nova show command to verify that this VM indeed runs on the physical host 10.202.0.21 (computer21):

  root@controller:~# nova show 96ee1aad-af27-4c9d-968f-291dbb2766a1
  +--------------------------------------+---------------------------------------------------------------------------------+
  | Property | Value |
  +--------------------------------------+---------------------------------------------------------------------------------+
  | OS-DCF:diskConfig | AUTO |
  | OS-EXT-AZ:availability_zone | az_ip |
  | OS-EXT-SRV-ATTR:host | computer21 |
  | OS-EXT-SRV-ATTR:hostname | byp-volume |
  | OS-EXT-SRV-ATTR:hypervisor_hostname | computer21 |
  | OS-EXT-SRV-ATTR:instance_name | instance-00000989 |
  | OS-EXT-SRV-ATTR:kernel_id | |
  | OS-EXT-SRV-ATTR:launch_index | 0 |
  | OS-EXT-SRV-ATTR:ramdisk_id | |
  | OS-EXT-SRV-ATTR:reservation_id | r-hwiyx15c |
  | OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
  | OS-EXT-SRV-ATTR:user_data | - |
  | OS-EXT-STS:power_state | 1 |
  | OS-EXT-STS:task_state | - |
  | OS-EXT-STS:vm_state | active |
  | OS-SRV-USG:launched_at | 2016-11-09T08:19:41.000000 |
  | OS-SRV-USG:terminated_at | - |
  | accessIPv4 | |
  | accessIPv6 | |
  | blue-net network | 192.168.50.27 |
  | config_drive | |
  | created | 2016-11-09T07:53:59Z |
  | description | byp-volume |
  | flavor | m1.small (2) |
  | hostId | 5104ee1a0538048d6ef80b14563a0cbac461f86523c5c81f5d18069e |
  | host_status | UP |
  | id | 96ee1aad-af27-4c9d-968f-291dbb2766a1 |
  | image | Attempt to boot from volume - no image supplied |
  | key_name | octavia_ssh_key |
  | locked | False |
  | metadata | {} |
  | name | byp-volume |
  | os-extended-volumes:volumes_attached | [{"id": "ee0c4077-a607-4bc9-a8cf-e893837361f3", "delete_on_termination": true}] |
  | progress | 0 |
  | security_groups | default |
  | status | ACTIVE |
  | tenant_id | f21a9c86d7114bf99c711f4874d80474 |
  | updated | 2016-11-09T08:19:41Z |
  | user_id | 142d8663efce464c89811c63e45bd82e |
  | zq-test network | 192.168.6.6 |
  +--------------------------------------+---------------------------------------------------------------------------------+
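To cross-check the watcher IP against the nova show output, pull the hypervisor host out of the property table. A parse of the relevant row above (assumes the default two-column nova table layout):

```shell
# The OS-EXT-SRV-ATTR:host row from the `nova show` table above
nova_row='| OS-EXT-SRV-ATTR:host | computer21 |'

# Split on "|"; the third field is the value column.
host=$(echo "$nova_row" | awk -F'|' '{ gsub(/ /,"",$3); print $3 }')
echo "$host"    # computer21
```

The host resolved from the watcher address (10.202.0.21) and the host reported by nova (computer21) refer to the same machine, which confirms who holds the image open.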