Quickly deploying a Ceph cluster at a specific version (15.2.13) with ceph-deploy
Hostname | IP | Components |
ceph1 | 192.168.150.120 | ceph-deploy,mon,mgr,osd |
ceph2 | 192.168.150.121 | mon,mgr,osd |
ceph3 | 192.168.150.122 | mon,mgr,osd |
OS: CentOS 7
Ceph version: 15.2.13
Install the EPEL repository and disable the firewall on all nodes:
$ yum install epel-release -y
$ systemctl stop firewalld
$ systemctl disable firewalld
$ systemctl status firewalld
Set a permanent static hostname on each node (replace HostName with ceph1, ceph2, or ceph3):
$ hostnamectl --static set-hostname HostName
Edit the name-resolution file on every node:
$ vi /etc/hosts
# Add the following:
192.168.150.120 ceph1
192.168.150.121 ceph2
192.168.150.122 ceph3
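A quick optional sanity check that the new names resolve on each node before continuing:
$ getent hosts ceph1 ceph2 ceph3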
Install the NTP service:
$ yum -y install ntp ntpdate
Back up the old configuration:
$ cd /etc && mv ntp.conf ntp.conf.bak
Using ceph1 as the NTP server node, create a new NTP configuration file on ceph1:
$ vi /etc/ntp.conf
# Add the following:
restrict 127.0.0.1
restrict ::1
restrict 192.168.150.0 mask 255.255.255.0
server 127.127.1.0
fudge 127.127.1.0 stratum 8
On the ceph2 and ceph3 nodes, create a new NTP configuration file:
$ vi /etc/ntp.conf
# Add the following:
server 192.168.150.120
Start the NTP service on ceph1:
$ systemctl start ntpd
$ systemctl enable ntpd
$ systemctl status ntpd
On every node except ceph1, force a time sync against the server (ceph1):
$ ntpdate ceph1
On every node except ceph1, write the time to the hardware clock so the sync survives a reboot:
$ hwclock -w
On every node except ceph1, install and start the crontab tool, then add a periodic sync job:
$ yum install -y crontabs
$ chkconfig crond on
$ systemctl start crond
$ crontab -e
# Add the following:
*/10 * * * * /usr/sbin/ntpdate 192.168.150.120
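An optional check on ceph2/ceph3 that the cron entry is in place and that the server answers time queries:
$ crontab -l
$ ntpdate -q 192.168.150.120   # query-only, does not change the clock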
On ceph1, generate an SSH key pair and distribute the public key to every host:
$ ssh-keygen -t rsa
$ for i in {1..3}; do ssh-copy-id ceph$i; done
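A quick check from ceph1 that passwordless login now works on every node:
$ for i in {1..3}; do ssh ceph$i hostname; done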
Disable SELinux on all nodes:
$ setenforce 0
$ vi /etc/selinux/config
# Change SELINUX to disabled
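The change can be confirmed with getenforce (Permissive right after setenforce 0, Disabled after a reboot):
$ getenforce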
Configure a yum repository pinned to Ceph 15.2.13:
$ vi /etc/yum.repos.d/ceph.repo
# Add the following:
[Ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-15.2.13/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-15.2.13/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-15.2.13/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
Refresh the yum cache:
$ yum clean all && yum makecache
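Optionally, confirm that the pinned 15.2.13 packages are visible from the new repo before installing:
$ yum list available ceph --showduplicates | grep 15.2.13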
Install Ceph on all hosts:
$ yum -y install ceph
Verify that the installation succeeded:
$ ceph -v
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
Install ceph-deploy on ceph1:
$ yum -y install ceph-deploy
Create the cluster definition from ceph1:
$ cd /etc/ceph
$ ceph-deploy new ceph1 ceph2 ceph3
If the following error appears:
Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
  File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
    import pkg_resources
ImportError: No module named pkg_resources
Workaround:
$ yum install -y wget
$ wget https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip --no-check-certificate
$ yum install -y unzip
$ unzip distribute-0.7.3.zip
$ cd distribute-0.7.3
$ python setup.py install
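With pkg_resources restored, re-run the command that failed:
$ cd /etc/ceph
$ ceph-deploy new ceph1 ceph2 ceph3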
Edit the generated cluster configuration file:
$ vi /etc/ceph/ceph.conf
# Add the following:
[global]
fsid = f6b3c38c-7241-44b3-b433-52e276dd53c6
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.150.120,192.168.150.121,192.168.150.122
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.150.0/24
[mon]
mon_allow_pool_delete = true
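The public_network value must match the subnet the nodes' NICs actually use; a quick sanity check on each node (interface names will vary by environment):
$ ip -4 addr show | grep 192.168.150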
Deploy the initial monitors and distribute the admin keyring to all nodes:
$ ceph-deploy mon create-initial
$ ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3
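At this point every node should have the cluster config and the admin keyring under /etc/ceph:
$ ls /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring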
Check whether the mon daemons were deployed successfully:
$ ceph -s
  cluster:
    id:     8b4fd85a-14c4-4498-a866-30752083647d
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 87s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
The mon daemons have been deployed successfully.
Deploy the mgr daemons:
$ ceph-deploy mgr create ceph1 ceph2 ceph3
Check whether the mgr daemons were deployed successfully:
$ ceph -s
  cluster:
    id:     8b4fd85a-14c4-4498-a866-30752083647d
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 2m)
    mgr: ceph1(active, since 4s), standbys: ceph3, ceph2
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
The mgr daemons have been deployed successfully.
Confirm the sd* device names of the data disks on each node:
$ lsblk
NAME            MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda               8:0    0  20G  0 disk
├─sda1            8:1    0   1G  0 part /boot
└─sda2            8:2    0  19G  0 part
  ├─centos-root 253:0    0  17G  0 lvm  /
  └─centos-swap 253:1    0   2G  0 lvm  [SWAP]
sdb               8:16   0  10G  0 disk
sdc               8:32   0  10G  0 disk
sdd               8:48   0  10G  0 disk
sr0              11:0    1 4.4G  0 rom
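If a data disk carries leftover partition tables or filesystem signatures from earlier use, ceph-deploy can wipe it first (destructive, so double-check the device name; repeat per disk and host as needed):
$ ceph-deploy disk zap ceph1 /dev/sdb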
Deploy the OSDs, one per data disk:
$ ceph-deploy osd create ceph1 --data /dev/sdb
$ ceph-deploy osd create ceph1 --data /dev/sdc
$ ceph-deploy osd create ceph1 --data /dev/sdd
$ ceph-deploy osd create ceph2 --data /dev/sdb
$ ceph-deploy osd create ceph2 --data /dev/sdc
$ ceph-deploy osd create ceph2 --data /dev/sdd
$ ceph-deploy osd create ceph3 --data /dev/sdb
$ ceph-deploy osd create ceph3 --data /dev/sdc
$ ceph-deploy osd create ceph3 --data /dev/sdd
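Besides ceph -s below, ceph osd tree shows the nine OSDs grouped by host:
$ ceph osd tree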
Verify that the OSDs were deployed successfully:
$ ceph -s
  cluster:
    id:     8b4fd85a-14c4-4498-a866-30752083647d
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
            Module 'restful' has failed dependency: No module named 'pecan'

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 11m)
    mgr: ceph1(active, since 9m), standbys: ceph3, ceph2
    osd: 9 osds: 9 up (since 6s), 9 in (since 6s)

  task status:

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 81 GiB / 90 GiB avail
    pgs:     1 active+clean
The OSDs have been deployed successfully.
Disable the insecure global_id reclaim mode to clear the first warning:
$ ceph config set mon auth_allow_insecure_global_id_reclaim false
Install the missing pecan module to clear the 'restful' mgr dependency warning:
$ pip3 install pecan
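For the newly installed pecan module to be picked up, the mgr daemons may need to be restarted on each mgr node:
$ systemctl restart ceph-mgr.target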
Check the cluster status again:
$ ceph -s
  cluster:
    id:     8b4fd85a-14c4-4498-a866-30752083647d
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 3m)
    mgr: ceph1(active, since 3m), standbys: ceph3, ceph2
    osd: 9 osds: 9 up (since 10m), 9 in (since 10m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 81 GiB / 90 GiB avail
    pgs:     1 active+clean
The cluster is now in a healthy state.
Create a test pool named vdbench and enable zlib compression on it:
$ ceph osd pool create vdbench 250 250
$ ceph osd pool application enable vdbench rbd
$ ceph osd pool set vdbench compression_algorithm zlib
$ ceph osd pool set vdbench compression_mode force
$ ceph osd pool set vdbench compression_required_ratio .99
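The settings can be read back to confirm they took effect:
$ ceph osd pool get vdbench compression_algorithm
$ ceph osd pool get vdbench compression_mode
$ ceph osd pool get vdbench compression_required_ratio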
Create and map an RBD image in the pool:
$ rbd create image1 --size 20G --pool vdbench --image-format 2 --image-feature layering
$ rbd map vdbench/image1
Generate a 100 MiB test file and write it through the mapped RBD device:
$ dd if=/dev/zero of=/home/compress_test bs=1M count=100
$ dd if=/home/compress_test of=/dev/rbd0 bs=1M count=100 oflag=direct
Check space usage to see the result:
$ ceph df
--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED     RAW USED  %RAW USED
hdd    90 GiB  80 GiB  684 MiB  9.7 GiB   10.75
TOTAL  90 GiB  80 GiB  684 MiB  9.7 GiB   10.75

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics  1   1    0 B      0        0 B      0      25 GiB
vdbench                2   151  100 MiB  29       150 MiB  0.19   25 GiB
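Once the test is finished, the image can be unmapped and the test pool removed (pool deletion is allowed here because mon_allow_pool_delete = true was set in ceph.conf):
$ rbd unmap /dev/rbd0
$ ceph osd pool delete vdbench vdbench --yes-i-really-really-mean-it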