
GlusterFS + Heketi Usage

李利
2023-12-01

GlusterFS

Pull a CentOS 7 image and install GlusterFS:

yum -y install centos-release-gluster
yum -y install glusterfs glusterfs-server glusterfs-fuse

docker commit <container> <new-image-name>:<version>
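
For example, if the base container were named c7 (a hypothetical name), the commit producing the image used below would be:

docker commit c7 mycentos7-gluster:1.1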

Use this image to create four containers: node21, node22, node23, node24.

docker run --name nodexx  --privileged=true  -itd  mycentos7-gluster:1.1 /usr/sbin/init
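
A minimal sketch of creating all four at once, assuming a user-defined network (here called gfs, an arbitrary name) so the containers can resolve one another by hostname:

# hypothetical network so node21..node24 can reach each other by name
docker network create gfs
for n in node21 node22 node23 node24; do
    docker run --name "$n" --hostname "$n" --network gfs \
        --privileged=true -itd mycentos7-gluster:1.1 /usr/sbin/init
done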

Start glusterd

Run on all four containers:

systemctl start glusterd
systemctl enable glusterd
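
To confirm the daemon is up on each node:

systemctl status glusterd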

Enter the node21 container and probe the other peers:


gluster peer probe node22
gluster peer probe node23
gluster peer probe node24

Check the cluster status:


[root@49731b96c60d gfs-share]# gluster peer status
Number of Peers: 3

Hostname: node22
Uuid: 766c0f42-cea4-485d-ae50-868dee148ed0
State: Peer in Cluster (Connected)

Hostname: node23
Uuid: 85a5a829-23ec-4372-b5dc-28082f72aa6d
State: Peer in Cluster (Connected)

Hostname: node24
Uuid: 1e1be778-c94d-4275-9207-06cc63341e36
State: Peer in Cluster (Connected)

Create a replicated volume

[root@49731b96c60d gfs-share]# gluster volume create app-data replica 2 node22:/opt/brick node23:/opt/brick force

volume create: app-data: success: please start the volume to access data

[root@49731b96c60d gfs-share]# gluster volume list
app-data

Start the volume

[root@49731b96c60d gfs-share]# gluster volume start app-data
volume start: app-data: success

View the volume info

[root@49731b96c60d gfs-share]#  gluster volume info app-data
 
Volume Name: app-data
Type: Replicate
Volume ID: c43030a7-2288-4445-b615-a740d787490c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node22:/opt/brick
Brick2: node23:/opt/brick
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

Enable the GlusterFS disk quota. Here the limit is set to 10 GB; this step is optional.

[root@49731b96c60d /]# gluster volume quota app-data enable
volume quota : success
[root@49731b96c60d /]# gluster volume quota app-data limit-usage / 10GB
volume quota : success
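
To verify that the limit took effect:

gluster volume quota app-data list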

Check the volume status:

[root@49731b96c60d /]# gluster volume status
Status of volume: app-data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node22:/opt/brick                     49152     0          Y       236  
Brick node23:/opt/brick                     49152     0          Y       183  
Self-heal Daemon on localhost               N/A       N/A        Y       316  
Quota Daemon on localhost                   N/A       N/A        Y       354  
Self-heal Daemon on node22                  N/A       N/A        Y       257  
Quota Daemon on node22                      N/A       N/A        Y       285  
Self-heal Daemon on node23                  N/A       N/A        Y       204  
Quota Daemon on node23                      N/A       N/A        Y       231  
Self-heal Daemon on node24                  N/A       N/A        Y       152  
Quota Daemon on node24                      N/A       N/A        Y       176  
 
Task Status of Volume app-data
------------------------------------------------------------------------------
There are no active volume tasks

Using the volume from a client

GlusterFS clients have three ways to use a volume: native mount, NFS, and Samba.

Here we use a native mount to attach the Gluster volume to the local directory /gfs-share on node21 and node22:

Enter the node21 container:

mkdir /gfs-share
mount -t glusterfs node21:app-data /gfs-share

cd /gfs-share
touch t.txt

Enter node22 and you will find the file under /opt/brick.
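
To make the mount survive a reboot, an /etc/fstab entry along these lines can be used (_netdev defers mounting until the network is up):

node21:app-data  /gfs-share  glusterfs  defaults,_netdev  0 0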

Brick rebalance

Redistribute files after scaling the volume out or in:

# Start the rebalance
gluster v rebalance VOL_NAME start
# The command above respects capacity: files are not migrated if the target brick has less free space than the source. To migrate regardless, force it:

gluster v rebalance VOL_NAME start force
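
Progress can be checked with:

gluster v rebalance VOL_NAME status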

Scaling out and in

Scaling out

Add the nodes:

gluster peer probe node25 
gluster peer probe node26

Add their bricks to the volume:

gluster volume add-brick app-data node25:/opt/brick node26:/opt/brick force

After adding bricks, rebalance the file distribution:

gluster v rebalance app-data start

Scaling in

# Start the data migration, moving data off the bricks being removed onto the remaining bricks
gluster volume remove-brick <volume-name> <brick> start

# Check the migration status
gluster volume remove-brick <volume-name> <brick> status

# After the migration finishes, commit the removal to delete the bricks
gluster volume remove-brick <volume-name> <brick> commit

Data is migrated during the start step, moving files from the bricks being removed to the remaining nodes; commit finalizes the removal once the migration has completed.
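
For example, to undo the expansion above and drop the pair added earlier (bricks in a replicated volume must be removed in multiples of the replica count):

gluster volume remove-brick app-data node25:/opt/brick node26:/opt/brick start
gluster volume remove-brick app-data node25:/opt/brick node26:/opt/brick status
gluster volume remove-brick app-data node25:/opt/brick node26:/opt/brick commit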

Heketi

yum install -y heketi heketi-client

cluster

Once created, a cluster is hard to delete; I have not yet found conditions under which deletion succeeds. Create one by loading a topology file:
heketi-cli topology load --json=/etc/heketi/g.json
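
As an alternative sketch, the same topology can be built one piece at a time; the <cluster-id> and <node-id> placeholders are the IDs printed by the preceding commands, and /dev/sdb is an assumed raw device:

# create an empty cluster, then register a node in it
heketi-cli cluster create
heketi-cli node add --cluster=<cluster-id> --zone=1 \
    --management-host-name=node21 --storage-host-name=node21
# /dev/sdb is an assumed device; heketi requires a bare block device, not a filesystem path
heketi-cli device add --name=/dev/sdb --node=<node-id>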

When creating a cluster, the GlusterFS nodes must already be in each other's trusted pool, otherwise creation fails.

volume

A volume cannot be smaller than 1 GB; the default is 3 replicas.
heketi-cli volume create --size=100
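
The size is in GB. The replica count can be overridden; for example, a 10 GB volume with 2 replicas:

heketi-cli volume create --size=10 --replica=2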

SSH

yum install -y openssl openssh-server openssh-clients

Open the config file /etc/ssh/sshd_config with vim and enable the PermitRootLogin and PubkeyAuthentication settings.

Generate a key pair:

ssh-keygen -t rsa

Two key files, owned by the current login user, will appear under ~/.ssh: id_rsa and id_rsa.pub. Enter the ~/.ssh directory:

cp id_rsa.pub authorized_keys

If machine A has id_rsa and id_rsa.pub, and machine B's authorized_keys contains that public key, A can ssh into B without a password.
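
Heketi's SSH executor uses exactly this kind of key. A minimal sketch of the matching section of /etc/heketi/heketi.json, assuming the private key generated above is copied to /etc/heketi/heketi_key (an assumed path):

"glusterfs": {
  "executor": "ssh",
  "sshexec": {
    "_keyfile_comment": "assumed path; point this at the private key generated above",
    "keyfile": "/etc/heketi/heketi_key",
    "user": "root",
    "port": "22"
  }
}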

Common problems when managing Gluster with Heketi

1. Cluster creation fails
1.1 Unable to create node: New Node doesn't have glusterd running
Possible causes: (1) glusterd really is not running; (2) the SSH configuration Heketi uses is wrong, e.g. a bad keyfile or user.
1.2 failed: node1 is either already part of another cluster or having volumes configured
The Gluster nodes' firewalls are probably blocking the ports between them; either disable the firewall or open the required ports, as shown below.
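
For example, on CentOS 7 (glusterd listens on 24007-24008; each brick takes one port from 49152 upward, so the range below assumes at most 100 bricks):

# either disable the firewall outright
systemctl stop firewalld && systemctl disable firewalld
# or open the Gluster ports on every node
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --reload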
