
gpmall Cluster Build, Part 2: The ZooKeeper Cluster

秦俊
2023-12-01

Planning the Nodes

Table 4-6-1 Node plan

IP              Hostname      Role
172.16.51.23    zookeeper1    Cluster node
172.16.51.32    zookeeper2    Cluster node
172.16.51.41    zookeeper3    Cluster node

1. Basic Environment Configuration

(1) Configure the hostnames

Connect to the three cloud hosts with SecureCRT.

Set the hostnames of the three nodes to zookeeper1, zookeeper2, and zookeeper3:

zookeeper1 node:

[root@localhost ~]# hostnamectl set-hostname zookeeper1

zookeeper2 node:

[root@localhost ~]# hostnamectl set-hostname zookeeper2

zookeeper3 node:

[root@localhost ~]# hostnamectl set-hostname zookeeper3
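For reference, the IP-to-hostname plan from Table 4-6-1 can be expressed as a small helper so each node's command is derived from its address. `hostname_for_ip` is a hypothetical convenience function for this tutorial, not part of any standard tooling:

```shell
# Hypothetical helper: map a node IP from Table 4-6-1 to its planned hostname.
hostname_for_ip() {
  case "$1" in
    172.16.51.23) echo zookeeper1 ;;
    172.16.51.32) echo zookeeper2 ;;
    172.16.51.41) echo zookeeper3 ;;
    *)            echo unknown    ;;
  esac
}

# On each node (as root), the command above would then become:
#   hostnamectl set-hostname "$(hostname_for_ip <this-node-ip>)"
```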

After the change, reconnect in SecureCRT and verify the new hostname:

zookeeper1 node:

[root@zookeeper1 ~]# hostnamectl

   Static hostname: zookeeper1

         Icon name: computer-vm

           Chassis: vm

        Machine ID: dae72fe0cc064eb0b7797f25bfaf69df

           Boot ID: c642ea4be7d349d0a929e557f23ce3dc

    Virtualization: kvm

  Operating System: CentOS Linux 7 (Core)

       CPE OS Name: cpe:/o:centos:centos:7

            Kernel: Linux 3.10.0-229.el7.x86_64

      Architecture: x86_64

zookeeper2 node:

[root@zookeeper2 ~]# hostnamectl

   Static hostname: zookeeper2

         Icon name: computer-vm

           Chassis: vm

        Machine ID: dae72fe0cc064eb0b7797f25bfaf69df

           Boot ID: cfcaf92af7a44028a098dc4792b441f4

    Virtualization: kvm

  Operating System: CentOS Linux 7 (Core)

       CPE OS Name: cpe:/o:centos:centos:7

            Kernel: Linux 3.10.0-229.el7.x86_64

      Architecture: x86_64

zookeeper3 node:

[root@zookeeper3 ~]# hostnamectl

   Static hostname: zookeeper3

         Icon name: computer-vm

           Chassis: vm

        Machine ID: dae72fe0cc064eb0b7797f25bfaf69df

           Boot ID: cff5bbd45243451e88d14e1ec75098c0

    Virtualization: kvm

  Operating System: CentOS Linux 7 (Core)

       CPE OS Name: cpe:/o:centos:centos:7

            Kernel: Linux 3.10.0-229.el7.x86_64

      Architecture: x86_64

(2) Configure the hosts file

Edit /etc/hosts on all three nodes so that each contains the following entries:

# vi /etc/hosts

172.16.51.23 zookeeper1

172.16.51.32 zookeeper2

172.16.51.41 zookeeper3
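A quick sanity check, sketched below, confirms that each planned hostname appears in a hosts-format file. The `check_hosts` function is illustrative; it defaults to /etc/hosts, and the optional argument exists only so a copy can be tested safely:

```shell
# Sketch: report whether each planned hostname appears in a hosts-format file.
check_hosts() {
  hosts_file="${1:-/etc/hosts}"   # pass another path to check a test copy
  for h in zookeeper1 zookeeper2 zookeeper3; do
    if grep -qw "$h" "$hosts_file"; then
      echo "$h: ok"
    else
      echo "$h: missing"
    fi
  done
}
```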

(3) Configure the YUM repository

Upload the provided gpmall-repo directory to /opt on all three nodes. First move the existing files in /etc/yum.repos.d out of the way (here, into /media):

# mv /etc/yum.repos.d/* /media/

Then create /etc/yum.repos.d/local.repo on all three nodes with the following content:

# cat /etc/yum.repos.d/local.repo

[gpmall]

name=gpmall

baseurl=file:///opt/gpmall-repo

gpgcheck=0

enabled=1

Refresh the YUM cache and verify that the repository is usable:

# yum clean all

# yum list
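Since the same local.repo must exist on all three nodes, it can be generated rather than typed by hand on each one. The sketch below writes the file shown above; the target-path argument is only there so the output can be inspected outside /etc:

```shell
# Sketch: write the gpmall local.repo; defaults to the real path, but a
# different target can be passed so the result can be inspected safely.
write_local_repo() {
  target="${1:-/etc/yum.repos.d/local.repo}"
  cat > "$target" <<'EOF'
[gpmall]
name=gpmall
baseurl=file:///opt/gpmall-repo
gpgcheck=0
enabled=1
EOF
}
```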

2. Build the ZooKeeper Cluster

(1) Install the JDK

Install OpenJDK 8 on all three nodes:

# yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel

# java -version

openjdk version "1.8.0_222"

OpenJDK Runtime Environment (build 1.8.0_222-b10)

OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)

(2) Extract the ZooKeeper package

Upload zookeeper-3.4.14.tar.gz to /root on all three nodes and extract it on each:

# tar -zxvf zookeeper-3.4.14.tar.gz

(3) Modify the configuration files on the three nodes

On the zookeeper1 node, enter the zookeeper-3.4.14/conf directory, rename zoo_sample.cfg to zoo.cfg, and edit it so that its active (non-comment) lines read as follows:

[root@zookeeper1 conf]# vi zoo.cfg

[root@zookeeper1 conf]# grep -n '^[a-Z]' zoo.cfg

2:tickTime=2000

5:initLimit=10

8:syncLimit=5

12:dataDir=/tmp/zookeeper

14:clientPort=2181

29:server.1=172.16.51.23:2888:3888

30:server.2=172.16.51.32:2888:3888

31:server.3=172.16.51.41:2888:3888
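Briefly, what these settings mean: tickTime is ZooKeeper's base time unit in milliseconds; initLimit and syncLimit are measured in ticks and bound how long followers may take to connect and to sync; each server.N line names ensemble member N, with port 2888 carrying quorum traffic and 3888 used for leader election. The same active lines can be emitted as one block, a sketch for copying the file to the other two nodes:

```shell
# Sketch: print the active zoo.cfg lines used in this cluster.
make_zoo_cfg() {
  cat <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=172.16.51.23:2888:3888
server.2=172.16.51.32:2888:3888
server.3=172.16.51.41:2888:3888
EOF
}
```

For example, `make_zoo_cfg > zookeeper-3.4.14/conf/zoo.cfg` would reproduce the configuration on another node.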

(4) Create the myid files

zookeeper1 node:

[root@zookeeper1 ~]# mkdir /tmp/zookeeper

[root@zookeeper1 ~]# vi /tmp/zookeeper/myid

[root@zookeeper1 ~]# cat /tmp/zookeeper/myid

1

zookeeper2 node:

[root@zookeeper2 ~]# mkdir /tmp/zookeeper

[root@zookeeper2 ~]# vi /tmp/zookeeper/myid

[root@zookeeper2 ~]# cat /tmp/zookeeper/myid

2

zookeeper3 node:

[root@zookeeper3 ~]# mkdir /tmp/zookeeper

[root@zookeeper3 ~]# vi /tmp/zookeeper/myid

[root@zookeeper3 ~]# cat /tmp/zookeeper/myid

3
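Each node's myid must match the N in its own server.N line in zoo.cfg. The hypothetical helper below derives the number from zoo.cfg by IP, which avoids writing the wrong id on a node:

```shell
# Sketch: extract the server number for a given IP from a zoo.cfg file.
myid_for_ip() {
  ip="$1"; cfg="$2"
  grep "^server\." "$cfg" | grep "=$ip:" | sed 's/^server\.\([0-9]*\)=.*/\1/'
}

# Illustrative usage on a node:
#   myid_for_ip 172.16.51.32 zookeeper-3.4.14/conf/zoo.cfg > /tmp/zookeeper/myid
```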

(5) Start the ZooKeeper service

Run the following commands in the zookeeper-3.4.14/bin directory on all three machines:

zookeeper1 node:

[root@zookeeper1 bin]# ./zkServer.sh start

ZooKeeper JMX enabled by default

Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[root@zookeeper1 bin]# ./zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg

Mode: follower

zookeeper2 node:

[root@zookeeper2 bin]# ./zkServer.sh start

ZooKeeper JMX enabled by default

Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg

Starting zookeeper ... already running as process 10175.

[root@zookeeper2 bin]# ./zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg

Mode: leader

zookeeper3 node:

[root@zookeeper3 bin]# ./zkServer.sh start

ZooKeeper JMX enabled by default

Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED

[root@zookeeper3 bin]# ./zkServer.sh status

ZooKeeper JMX enabled by default

Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg

Mode: follower

As the output shows, of the three nodes, zookeeper2 is the leader and the other two are followers.

The ZooKeeper cluster is now configured.

Note: if checking a node's status reports an error, make sure ZooKeeper has been started on all nodes first, then check again; a node cannot report leader or follower until a quorum is up.
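A healthy ensemble has exactly one leader. The sketch below checks that invariant against the `Mode:` lines collected from each node's `zkServer.sh status` output; `one_leader` is an illustrative helper that reads the combined output on stdin:

```shell
# Sketch: read combined status output on stdin and verify exactly one leader.
one_leader() {
  leaders=$(grep -c '^Mode: leader')
  if [ "$leaders" -eq 1 ]; then
    echo "healthy: one leader"
  else
    echo "unhealthy: $leaders leader(s)"
  fi
}
```

For example, run `./zkServer.sh status` on every node, concatenate the outputs, and pipe them through `one_leader`.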
