Ceph 13: deploying a three-node Ceph Mimic (13.2.10) cluster with ceph-deploy (2.0.1)
A colleague asked for a Ceph cluster. Three machines were set aside for the deployment; since there was no particular version requirement, the Mimic release was installed for this exercise.
Three servers were prepared:
Each has a single 12 TB disk attached as the OSD data disk.
The operating system image is:
CentOS-7-x86_64-Minimal-2009.iso
Install the base software packages
#Install the packages (EPEL first, so htop/iftop/nethogs and friends resolve on a minimal install; the ntp package provides the ntpd daemon)
yum install -y epel-release yum-utils;
yum install -y tree nmap ntp dos2unix lrzsz lsof wget tcpdump htop iftop iotop sysstat nethogs;
yum install -y psmisc net-tools bash-completion vim-enhanced;
yum install -y vim pciutils traceroute unzip zip expect tar telnet;
#Update the system with the latest patches
yum update -y;
#Disable SELinux (the config change takes effect after a reboot)
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux;
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config;
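The sed edits above only take effect after a reboot; to also drop SELinux enforcement in the running system:
#Switch to permissive mode immediately, no reboot required
setenforce 0;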
#Stop and disable the firewall
systemctl stop firewalld.service;
systemctl disable firewalld.service;
#Set the time zone
timedatectl set-timezone "Asia/Shanghai";
hwclock;
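Ceph monitors flag clock skew as a health warning, so it is worth enabling the ntp daemon installed above:
#Keep the clocks in sync across the three nodes
systemctl enable ntpd.service;
systemctl start ntpd.service;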
#Set the CPU frequency governor to performance (on CentOS 7 the cpupower tool ships in kernel-tools)
yum install kernel-tools -y;
cpupower frequency-set -g performance;
#Back up the NIC configuration files
mkdir -p /etc/sysconfig/network-scripts/bak
cp /etc/sysconfig/network-scripts/ifcfg-* /etc/sysconfig/network-scripts/bak
#Back up the yum repo files
mkdir -p /etc/yum.repos.d/bak
cp /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak
SSH mutual trust between the nodes has been set up (a sketch follows the hosts file below), and the hosts file is ready:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
***.16.31.19 node19
***.16.31.23 node23
***.16.31.24 node24
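The mutual trust above is just key-based SSH from the deploy node (node19 here) to all three hosts; a minimal sketch, assuming everything runs as root:
#Generate a key pair and push the public key to every node, including the deploy node itself
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for h in node19 node23 node24; do ssh-copy-id root@$h; done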
#=================================================================
#Purpose: set up the pip environment
#=================================================================
#Download the installer (the /pip/2.7/ path serves the last pip release that supports Python 2.7)
wget https://bootstrap.pypa.io/pip/2.7/get-pip.py;
#Install pip for the system Python 2.7
python get-pip.py;
pip install --upgrade remoto
remoto is upgraded here to avoid the error "AttributeError: 'module' object has no attribute 'needs_ssh'" that older versions hit when running ceph-deploy.
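If yum cannot find ceph-deploy in the enabled repos, the upstream Ceph Mimic repo can be added first; a sketch of /etc/yum.repos.d/ceph.repo using the standard download.ceph.com layout for el7:
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF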
#Install ceph-deploy
yum install -y ceph-deploy
#Create the deployment directory
mkdir /etc/ceph-cluster/ && cd /etc/ceph-cluster/
# Create a new cluster. When the command finishes it generates three files:
# ceph.conf,
# ceph-deploy-ceph.log,
# and ceph.mon.keyring
ceph-deploy new --cluster-network ***.16.31.0/24 --public-network ***.16.31.0/24 node19 node23 node24
# Edit ceph.conf and delete the spaces from the node list in mon_initial_members
# before: mon_initial_members = node19, node23, node24
# after:  mon_initial_members = node19,node23,node24
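The same edit can be done with a one-liner, assuming the file sits at /etc/ceph-cluster/ceph.conf as created above:
#Strip the spaces after the commas on the mon_initial_members line
sed -i '/^mon_initial_members/s/, /,/g' /etc/ceph-cluster/ceph.conf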
# Install Ceph on every node
ceph-deploy install node19 node23 node24
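Depending on the ceph-deploy version and the configured repos, install may pull a different release; the target release can be pinned explicitly with the --release flag:
ceph-deploy install --release mimic node19 node23 node24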
# Create the mon daemons
ceph-deploy --overwrite-conf mon create node19 node23 node24
# Gather the keyring files; this generates:
# ceph.bootstrap-mds.keyring
# ceph.bootstrap-mgr.keyring
# ceph.bootstrap-osd.keyring
# ceph.bootstrap-rgw.keyring
# ceph.client.admin.keyring
ceph-deploy gatherkeys node19
# Distribute ceph.conf and the admin keyring to every node
ceph-deploy admin node19 node23 node24
# Create the mgr daemons
ceph-deploy mgr create node19 node23 node24
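Optionally, Mimic's mgr dashboard can be enabled once a mgr is active; a sketch following the Mimic dashboard workflow (the credentials below are placeholders):
#Enable the dashboard module, generate a self-signed certificate, and set a login
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
ceph dashboard set-login-credentials admin Admin123   #placeholder credentials, change them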
# Create the mds daemons
ceph-deploy mds create node19 node23 node24
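# Create one bluestore OSD per node on the 12 TB data disk (ceph-deploy 2.x defaults to bluestore)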
ceph-deploy osd create --data /dev/sdb node19
ceph-deploy osd create --data /dev/sdb node23
ceph-deploy osd create --data /dev/sdb node24
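The status output below already shows a CephFS filesystem on two pools, which implies the pools and the filesystem were created after the OSDs came up; a sketch of those steps (the pool names and 32-PG counts are assumptions that happen to add up to the 64 PGs shown):
#Create the data and metadata pools, then the filesystem that the mds daemons will serve
ceph osd pool create cephfs_data 32
ceph osd pool create cephfs_metadata 32
ceph fs new cephfs cephfs_metadata cephfs_data
#Verify the cluster status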
ceph -s
  cluster:
    id:     f78ce206-f940-4aef-9f46-2e5be5eaf221
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node19,node23,node24
    mgr: node19(active), standbys: node24, node23
    mds: cephfs-1/1/1 up {0=node24=up:active}, 2 up:standby
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   2 pools, 64 pgs
    objects: 22 objects, 2.2 KiB
    usage:   3.0 GiB used, 33 TiB / 33 TiB avail
    pgs:     64 active+clean
If the deployment runs into trouble, the following purge commands remove the cluster state so everything can be redeployed from scratch:
ceph-deploy purge node19 node23 node24
ceph-deploy purgedata node19 node23 node24
ceph-deploy forgetkeys
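Note that purge does not wipe the OSD data disks; before redeploying, the leftover LVM/bluestore signatures can be cleared with disk zap:
#Zap the old OSD disk on each node (destroys all data on /dev/sdb)
ceph-deploy disk zap node19 /dev/sdb
ceph-deploy disk zap node23 /dev/sdb
ceph-deploy disk zap node24 /dev/sdb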
A quick write-up of the above, for reference in future deployments.