Introduction to the ceph-bluestore-tool utility
[root@test-1 /]# ceph-bluestore-tool --help
All options:
Options:
-h [ --help ] produce help message
--path arg bluestore path // path to the OSD data directory
--out-dir arg output directory // destination directory for exports, e.g. bluefs-export
-l [ --log-file ] arg log file // log file location; many commands simply call functions in bluestore.cc, which emit a lot of log output
--log-level arg log level (30=most, 20=lots, 10=some, 1=little) // log verbosity
--dev arg device(s) // a block, db, or wal device
--deep arg deep fsck (read all data)
-k [ --key ] arg label metadata key name
-v [ --value ] arg label metadata value
Positional options:
--command arg fsck, repair, bluefs-export, bluefs-bdev-sizes,
bluefs-bdev-expand, show-label, set-label-key,
rm-label-key, prime-osd-dir
1. fsck (the OSD must be stopped first)
Checks the consistency of the OSD's metadata; with --deep 1 it also verifies the object data. Internally it calls BlueStore::_fsck(bool deep, bool repair).
[root@test-1 ~]# ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0 --deep 1
fsck success
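Because the tool needs exclusive access to the BlueStore devices, stop the OSD daemon before running the check. A minimal sketch, assuming a systemd-managed cluster and osd.0:
systemctl stop ceph-osd@0      # release the bluestore devices
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0 --deep 1
systemctl start ceph-osd@0     # bring the OSD back once the check passes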
2. repair
Runs the same consistency check and additionally repairs the errors it finds.
[root@test-1 ~]# ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0 --deep 1
repair success
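fsck and repair pair naturally: run the check first and repair only when it fails. A sketch, assuming the tool exits non-zero when fsck finds errors (the OSD must again be stopped):
if ! ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0 --deep 1; then
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0 --deep 1
fi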
3. bluefs-export
Exports the RocksDB database as ordinary files and directories.
BlueStore does its low-level I/O through BlueFS, which RocksDB uses as its Env, so the RocksDB content organized as directories is normally invisible; this command provides a way to dump it out into a directory tree.
[root@test-1 ~]# ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd/ceph-0 --out-dir /home/osd-0/
infering bluefs devices from bluestore path
slot 0 /var/lib/ceph/osd/ceph-0/block.wal
slot 1 /var/lib/ceph/osd/ceph-0/block.db
slot 2 /var/lib/ceph/osd/ceph-0/block
db/
db/000139.sst
db/CURRENT
db/IDENTITY
db/LOCK
db/MANIFEST-000147
db/OPTIONS-000147
db/OPTIONS-000150
db.slow/
db.wal/
db.wal/000148.log
[root@test-1 osd-0]# tree
.
├── db
│ ├── 000139.sst
│ ├── CURRENT
│ ├── IDENTITY
│ ├── LOCK
│ ├── MANIFEST-000147
│ ├── OPTIONS-000147
│ └── OPTIONS-000150
├── db.slow
└── db.wal
└── 000148.log
3 directories, 8 files
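Once exported, the database can be examined with ordinary RocksDB tooling. A sketch, assuming RocksDB's ldb utility is installed; note that BlueFS keeps the write-ahead log in the separate db.wal/ directory while plain RocksDB expects it next to the db files, so it may need to be copied over first:
cp /home/osd-0/db.wal/*.log /home/osd-0/db/   # put the wal where plain rocksdb looks for it
ldb --db=/home/osd-0/db scan --hex | head      # dump the first few key/value pairs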
4. bluefs-bdev-sizes
Prints each BlueFS device's total size and the extents BlueFS owns on it.
[root@test-1 ~]# ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-0/
infering bluefs devices from bluestore path
slot 0 /var/lib/ceph/osd/ceph-0//block.wal
slot 1 /var/lib/ceph/osd/ceph-0//block.db
slot 2 /var/lib/ceph/osd/ceph-0//block
0 : size 0x780000000 : own 0x[1000~77ffff000]
1 : size 0x780000000 : own 0x[2000~77fffe000]
2 : size 0x4affc00000 : own 0x[23ffe00000~300000000]
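All numbers are hex: size is the device size and own is the extent list (offset~length) that BlueFS owns. Slot 0's size 0x780000000, for example, is the 32212254720-byte (30 GiB) wal partition that show-label reports below, and BlueFS owns almost all of it starting at offset 0x1000; slot 2's 0x4affc00000 is the 322118352896-byte block device. Converting is a one-liner:
printf '%d\n' 0x780000000   # -> 32212254720 bytes = 30 GiB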
5. bluefs-bdev-expand
If a device has been replaced by (or resized to) a larger one, this command expands BlueFS to take over the extra space (e.g. to grow the db).
[root@test-1 ~]# ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0/
infering bluefs devices from bluestore path
slot 0 /var/lib/ceph/osd/ceph-0//block.wal
slot 1 /var/lib/ceph/osd/ceph-0//block.db
slot 2 /var/lib/ceph/osd/ceph-0//block
start:
0 : size 0x780000000 : own 0x[1000~77ffff000]
1 : size 0x780000000 : own 0x[2000~77fffe000]
2 : size 0x4affc00000 : own 0x[23ffe00000~300000000]
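A typical workflow is to grow the underlying device first and then let BlueFS claim the new space. A sketch, assuming a systemd-managed OSD whose db sits on a hypothetical LVM volume /dev/vg0/osd0-db:
systemctl stop ceph-osd@0                 # the tool needs exclusive access
lvextend -L +10G /dev/vg0/osd0-db         # grow the underlying volume (hypothetical LV name)
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0/
systemctl start ceph-osd@0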
6. show-label
Shows the label metadata of a device or of an OSD path.
[root@test-1 ~]# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0/
infering bluefs devices from bluestore path
{
"/var/lib/ceph/osd/ceph-0//block": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 322118352896,
"btime": "2018-10-08 10:26:39.252910",
"description": "main",
"bluefs": "1",
"ceph_fsid": "acc6dc6a-79cd-45dc-bf1f-83a576eb8039",
"kv_backend": "rocksdb",
"magic": "ceph osd volume v026",
"mkfs_done": "yes",
"osd_key": "AQBcwLpbGh89JRAAoEbi/OgMvKABkZmI9r/B8g==",
"ready": "ready",
"whoami": "0"
},
"/var/lib/ceph/osd/ceph-0//block.wal": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 32212254720,
"btime": "2018-10-08 10:26:39.285854",
"description": "bluefs wal"
},
"/var/lib/ceph/osd/ceph-0//block.db": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 32212254720,
"btime": "2018-10-08 10:26:39.255250",
"description": "bluefs db"
}
}
[root@test-1 ~]# ceph-bluestore-tool show-label --dev /dev/ceph-e7878472-0d23-42a4-a9be-d69edc9ed4b0/osd-block-8b0394e4-1dcc-44c1-82b7-864b2162de38
{
"/dev/ceph-e7878472-0d23-42a4-a9be-d69edc9ed4b0/osd-block-8b0394e4-1dcc-44c1-82b7-864b2162de38": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 322118352896,
"btime": "2018-10-08 10:26:39.252910",
"description": "main",
"bluefs": "1",
"ceph_fsid": "acc6dc6a-79cd-45dc-bf1f-83a576eb8039",
"kv_backend": "rocksdb",
"magic": "ceph osd volume v026",
"mkfs_done": "yes",
"osd_key": "AQBcwLpbGh89JRAAoEbi/OgMvKABkZmI9r/B8g==",
"ready": "ready",
"whoami": "0"
}
}
[root@test-1 ~]# ceph-bluestore-tool show-label --dev /dev/vde2
{
"/dev/vde2": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 32212254720,
"btime": "2018-10-08 10:26:39.255250",
"description": "bluefs db"
}
}
[root@test-1 ~]# ceph-bluestore-tool show-label --dev /dev/vde1
{
"/dev/vde1": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 32212254720,
"btime": "2018-10-08 10:26:39.285854",
"description": "bluefs wal"
}
}
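The JSON output is easy to post-process. For instance, to pull just the osd_uuid out of a label (a sketch, assuming jq is installed; the top-level key is the device path):
ceph-bluestore-tool show-label --dev /dev/vde1 | jq -r '.["/dev/vde1"].osd_uuid'
# -> 8b0394e4-1dcc-44c1-82b7-864b2162de38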
7. set-label-key / rm-label-key
Adds or removes label metadata keys.
[root@test-1 ~]# ceph-bluestore-tool set-label-key -k aaa -v bbb --dev /dev/vde1
[root@test-1 ~]# ceph-bluestore-tool show-label --dev /dev/vde1
{
"/dev/vde1": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 32212254720,
"btime": "2018-10-08 10:26:39.285854",
"description": "bluefs wal",
"aaa": "bbb"
}
}
[root@test-1 ~]# ceph-bluestore-tool rm-label-key -k aaa --dev /dev/vde1
[root@test-1 ~]# ceph-bluestore-tool show-label --dev /dev/vde1
{
"/dev/vde1": {
"osd_uuid": "8b0394e4-1dcc-44c1-82b7-864b2162de38",
"size": 32212254720,
"btime": "2018-10-08 10:26:39.285854",
"description": "bluefs wal"
}
}
I have not yet run into a real-world use case for fsck and repair; I will update this post when I do. show-label is useful for inspecting an OSD's metadata, especially when the OSD has already been unmounted.