| Command | Description |
|---|---|
| ceph-deploy new [mon-node ...] | Designates the given node(s) as monitors and starts the deployment of a new Ceph cluster. Three files are created in the current directory: ceph.conf, ceph-deploy-ceph.log and ceph.mon.keyring. The generated ceph.conf records the monitors passed on the command line as `mon_initial_members = [monitor]`. (The sketch after this table strings the ceph-deploy commands together.) |
| ceph-deploy install [host ...] | Installs the Ceph packages on the specified remote hosts (admin-node / osd-node / mon-node). If no stable Ceph release can be found during the install, edit /etc/apt/sources.list.d/ceph.list on the admin-node by hand. |
| ceph-deploy mon [CMD ...] | Manages Ceph monitor daemons. |
| ceph-deploy mon create-initial | Deploys and initializes the monitors listed in `mon_initial_members = [monitor]`, waits for them to form a quorum, then gathers the keys (several keyring files appear in the current directory) and reports the monitors' status. If the monitors never reach quorum, the command eventually times out. |
| ceph-deploy osd prepare [node:/dir ...] | Prepares OSDs from the admin node (path-based workflow: prepare first, then activate). The argument can be a directory or a disk, e.g. directory: `ceph-deploy osd prepare node1:/var/local/osd0`; disk: `ceph-deploy osd prepare node1:sdb1:sdc`. |
| ceph-deploy osd activate [node:/dir ...] | Activates OSDs that have already been prepared. The argument can be a directory or a disk, e.g. directory: `ceph-deploy osd activate node1:/var/local/osd0`; disk: `ceph-deploy osd activate node1:sdb1:sdc`. |
| ceph-deploy osd create ... | When working with whole disks or journals, `create` performs prepare and activate in one step. Example 1, FileStore with a journal (each node needs two disks or two partitions): 1) create the logical volume: `vgcreate data /dev/sdb`, `lvcreate --size 100G --name log data`; 2) create the OSD: `ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdc --journal data/log storage1`. Example 2, BlueStore: 1) create the logical volumes: `vgcreate cache /dev/sdb`, `lvcreate --size 100G --name db-lv-0 cache`, `lvcreate --size 100G --name wal-lv-0 cache`; 2) create the OSD (`--block-db` corresponds to the config option `bluestore_block_db_path`, `--block-wal` to `bluestore_block_wal_path`): `ceph-deploy osd create --bluestore storage1 --data /dev/sdc --block-db cache/db-lv-0 --block-wal cache/wal-lv-0`. |
| ceph-deploy admin [admin-node] [osd-node mon-node ...] | Copies the configuration file and the admin keyring to /etc/ceph/ on the admin node and the Ceph nodes (monitor/osd). |
| ceph-deploy mgr create [node] | Deploys ceph-mgr on the node identified by the given hostname. |
| ceph-deploy disk zap [osd-node]:[disk-name] | Wipes the partition table and contents of the given disk on the node; internally it calls `sgdisk --zap-all` to destroy the GPT and MBR so the disk can be repartitioned. Usually used together with `ceph-deploy osd prepare/activate`. |
| ceph-deploy mds create {host-name}[:{daemon-name}] [{host-name}[:{daemon-name}] ...] | E.g. `ceph-deploy --overwrite-conf mds create server1:mds-daemon-1` creates an MDS daemon named mds-daemon-1 on server1, making server1 a CephFS metadata server. |
| ceph-deploy --overwrite-conf admin {mon} {osd} | Pushes an updated ceph.conf to the mon and osd nodes. The services must be restarted before the change takes effect; selected daemons can also be restarted, e.g. `sudo systemctl restart ceph-mon.target ceph-osd.target`. |
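The table above covers the ceph-deploy commands one by one; the following is a minimal sketch of how they are typically strung together to bring up a small cluster. The hostnames (node1..node3) and the data disk (sdb / /dev/sdb) are assumptions for illustration, and flag syntax differs somewhat between ceph-deploy 1.5.x and 2.x, so treat this as a sketch rather than a copy-paste recipe.

```bash
# Minimal ceph-deploy bring-up sketch (hosts node1-node3 and disk sdb are assumed).
mkdir my-cluster && cd my-cluster

# Write ceph.conf, ceph.mon.keyring and ceph-deploy-ceph.log to the current directory.
ceph-deploy new node1 node2 node3

# Install the Ceph packages on all nodes.
ceph-deploy install node1 node2 node3

# Deploy the initial monitors, wait for quorum and gather the keys.
ceph-deploy mon create-initial

# Push ceph.conf and the admin keyring to /etc/ceph/ on every node.
ceph-deploy admin node1 node2 node3

# Deploy a manager daemon.
ceph-deploy mgr create node1

# Wipe each data disk and create a BlueStore OSD on it.
for host in node1 node2 node3; do
    ceph-deploy disk zap "${host}":sdb
    ceph-deploy osd create --bluestore "${host}" --data /dev/sdb
done

# Verify the cluster state.
ceph -s
```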
| Command | Description |
|---|---|
| ceph -s | Show the overall status of the Ceph cluster. |
| ceph df | Show the cluster's storage utilization, globally and per pool. |
| ceph osd tree | Print the CRUSH hierarchy of OSDs, with the status and weight of each OSD. |
| ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] [crush-rule-name] [expected-num-objects] | E.g. `ceph osd pool create rbd 256` creates a pool named rbd, equivalent to `rados mkpool rbd`. Before creating a pool you usually want to override the default number of placement groups, since the default is not ideal (see the sketch after this table). |
| ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure [erasure-code-profile] [crush-rule-name] [expected_num_objects] | Same as above, but the pool uses erasure coding (EC) for fault tolerance. |
| ceph osd lspools | List the cluster's pools. |
| ceph osd pool ls detail | Show detailed pool information, e.g.: `ceph> osd pool ls detail` → pool 14 'rbd' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 125 flags hashpspool stripe_width 0 |
| ceph osd pool set {pool-name} size {num} | Set the replica count of a pool to {num}, e.g. `sudo ceph osd pool set rbd size 1`, `sudo ceph osd pool set rbd min_size 1`. |
| ceph osd pool get {pool-name} size | Get the replica count of a pool, e.g. `ceph osd pool get rbd size`. The general form `osd pool get <poolname> size\|min_size\|crash_replay_interval\|pg_num\|pgp_num\|crush_rule\|hashpspool\|nodelete\|nopgchange\|...` reads any pool parameter. |
| ceph osd pool delete {pool-name} {pool-name} --yes-i-really-really-mean-it | Delete a pool. Requires `mon allow pool delete = true` under [mon] in /etc/ceph/ceph.conf; restart the monitors with `sudo systemctl restart ceph-mon.target`, then run e.g. `ceph osd pool delete rbd rbd --yes-i-really-really-mean-it`. |
| ceph osd dump \| grep 'replicated size' | Dump the OSD map and filter for the pools' replication settings, e.g.: `$ ceph osd dump \| grep 'replicated size'` → pool 9 'rbd' replicated size 1 min_size 1 crush_rule 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 68 flags hashpspool stripe_width 0 |
| ceph osd set noscrub | Disable scrubbing (consistency checks) to avoid the performance impact; disable deep scrubbing with `ceph osd set nodeep-scrub`. |
| ceph osd getcrushmap -o {compiled-crushmap-filename} | Export the cluster's CRUSH map (in compiled, binary form) to the given file. |
| crushtool -d {compiled-crushmap-filename} -o {decompiled-crushmap-filename} | Decompile an exported CRUSH map into a plain-text file. |
| crushtool -c {decompiled-crushmap-filename} -o {compiled-crushmap-filename} | Compile a CRUSH map text file. |
| ceph -n {nodename.id} --show-config | Show the configuration as seen by the given daemon, e.g. `ceph -n osd.0 --show-config`, `ceph -n mon.node1 --show-config`. |
| ceph mds stat | Show the status of the CephFS metadata servers. |
| ceph fs new <fs_name> <metadata_pool> <data_pool> | Create a new CephFS file system from the given metadata pool and data pool. |
| ceph fs ls | List the CephFS file systems, e.g.: `$ sudo ceph fs ls` → name: test_cephfs1, metadata pool: cephfs_metadata, data pools: [cephfs_data ] |
| ceph daemon <mon/osd/mds>.<id> config set <option> <value> | Change a parameter at runtime; only works against a daemon instance running on the local host. |
| ceph tell <mon/osd/mds>.<id> injectargs '--<option> <value>' | Change a parameter at runtime; '*' can be used as the id to target every instance of that daemon type. |

Reference: http://docs.ceph.com/docs/master/rados/operations/pools/
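As a worked example of the pool and CRUSH map commands above, here is a sketch that creates a replicated pool, tunes its replica counts, round-trips the CRUSH map through crushtool, and injects a parameter at runtime. The pool name `testpool`, the file names and the option values are illustrative only; `ceph osd setcrushmap` is not listed in the table, but it is the usual counterpart of `getcrushmap`.

```bash
# Create a replicated pool with an explicit pg_num/pgp_num instead of the defaults.
ceph osd pool create testpool 128 128 replicated

# Inspect the pools and adjust the replica counts.
ceph osd lspools
ceph osd pool ls detail
ceph osd pool set testpool size 3
ceph osd pool set testpool min_size 2
ceph osd pool get testpool size

# Export the CRUSH map, decompile it to text, edit it, recompile and inject it back.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt ...
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

# Change a runtime parameter on every OSD (the '*' form mentioned above).
ceph tell osd.* injectargs '--osd_max_backfills 2'
```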
| Command | Description |
|---|---|
| ceph-conf --name mon.node1 --show-config-value log_file | Show the log path of a Ceph monitor, e.g.: `~$ ceph-conf --name mon.node1 --show-config-value log_file` → /var/log/ceph/ceph-mon.node1.log. In fact the monitor node's /var/log/ceph/ directory holds several kinds of logs: ceph-mgr, ceph-mon, ceph, ceph-client and ceph.audit (see the sketch after this table). |
| ceph-conf --name osd.0 --show-config-value log_file | Show the log path of a Ceph OSD. |
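A small sketch built on the two commands above, assuming the daemon names mon.node1 and osd.0 from the examples, that prints the configured log path of each daemon:

```bash
# Print the configured log file for a couple of local daemons (names are illustrative).
for name in mon.node1 osd.0; do
    echo -n "${name}: "
    ceph-conf --name "${name}" --show-config-value log_file
done

# The log directory itself normally contains all of the daemons' logs.
ls /var/log/ceph/
```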
| Command | Description |
|---|---|
| rados lspools | List the cluster's pools. |
| rados mkpool <pool-name> | Create a pool named <pool-name>. |
| rados -p <pool-name> ls | List the objects in a pool, e.g. `rados -p rbd ls` lists the objects in the pool named rbd (see the sketch after this table). |
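A short sketch of the rados commands above: create a pool (on newer releases `rados mkpool` is deprecated in favour of `ceph osd pool create`), write a test object into it, list the objects, and read the object back. The pool and object names are illustrative.

```bash
# Create a pool and write/read a test object (pool and object names are illustrative).
rados mkpool mypool

echo "hello ceph" > /tmp/hello.txt
rados -p mypool put hello-object /tmp/hello.txt   # upload the file as an object

rados -p mypool ls                                # should now list hello-object
rados -p mypool get hello-object /tmp/hello.out   # download the object again
```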
| Command | Description |
|---|---|
| rbd create image1 --size 60G | Create an image named image1 of size 60G in the default rbd pool; equivalent to `rbd create rbd/image1 --size 60G --image-format 2` (the full workflow is sketched after this table). |
| rbd list | List all RBD images. |
| rbd info image1 | Show the details of a specific image, e.g.: `$ rbd info image1` → rbd image 'image1': size 1GiB in 256 objects; order 22 (4MiB objects); block_name_prefix: rbd_data.10356b8b4567; format: 2; features: layering, exclusive-lock, object-map, fast-diff, deep-flatten; create_timestamp: Wed Jun 19 16:05:40 2019 |
| rbd feature disable image1 exclusive-lock object-map fast-diff deep-flatten | Disable some of image1's features. |
| rbd map image1 | Map image1 as a block device in the operating system, e.g.: `$ sudo rbd map image1` → /dev/rbd0 |
| rbd showmapped | Show the mapped block devices, e.g. `$ rbd showmapped` prints a line such as `0 rbd image1 - /dev/rbd0` under the columns id/pool/image/snap/device. |
| rbd unmap image1 | Unmap the block device. |
| rbd rm image1 | Remove an RBD image. |
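Putting the rbd commands above together, an end-to-end sketch looks like this. The filesystem type (xfs) and mount point (/mnt) are assumptions, and disabling the newer image features is often needed because kernel RBD clients may not support them.

```bash
# Create an image in the default 'rbd' pool, map it, use it, then clean up.
rbd create image1 --size 60G
rbd info image1

# Kernel clients often lack support for the newer features, so disable them first.
rbd feature disable image1 exclusive-lock object-map fast-diff deep-flatten

sudo rbd map image1            # prints the device name, e.g. /dev/rbd0
rbd showmapped

# Put a filesystem on the device and mount it (xfs and /mnt are illustrative).
sudo mkfs.xfs /dev/rbd0
sudo mount /dev/rbd0 /mnt

# Tear everything down again.
sudo umount /mnt
sudo rbd unmap image1
rbd rm image1
```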
More frequently used commands will be added here later...
2. ceph-deploy -h manual
```
$ ceph-deploy -h
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
[--overwrite-conf] [--cluster NAME] [--ceph-conf CEPH_CONF]
COMMAND ...
Easy Ceph deployment
-^-
/ \
|O o| ceph-deploy v1.5.38
).-.(
'/|||\`
| '|` |
'|`
Full documentation can be found at: http://ceph.com/ceph-deploy/docs
optional arguments:
-h, --help show this help message and exit
-v, --verbose be more verbose
-q, --quiet be less verbose
--version the current installed version of ceph-deploy
--username USERNAME the username to connect to the remote host
--overwrite-conf overwrite an existing conf file on remote host (if
present)
--cluster NAME name of the cluster
--ceph-conf CEPH_CONF
use (or reuse) a given ceph.conf file
commands:
COMMAND description
new Start deploying a new cluster, and write a
CLUSTER.conf and keyring for it.
install Install Ceph packages on remote hosts.
rgw Ceph RGW daemon management
mgr Ceph MGR daemon management
mon Ceph MON Daemon management
mds Ceph MDS daemon management
gatherkeys Gather authentication keys for provisioning new nodes.
disk Manage disks on a remote host.
osd Prepare a data disk on remote host.
admin Push configuration and client.admin key to a remote
host.
repo Repo definition management
config Copy ceph.conf to/from remote host(s)
uninstall Remove Ceph packages from remote hosts.
purge Remove Ceph packages from remote hosts and purge all
data.
purgedata Purge (delete, destroy, discard, shred) any Ceph data
from /var/lib/ceph
calamari Install and configure Calamari nodes. Assumes that a
repository with Calamari packages is already
configured. Refer to the docs for examples
(http://ceph.com/ceph-deploy/docs/conf.html)
forgetkeys Remove authentication keys from the local directory.
pkg Manage packages on remote hosts.
```