
ClickHouse version upgrade and switching to ClickHouse Keeper

司马耘豪
2023-12-01

Preface

A record of a ClickHouse version upgrade, together with an experiment migrating from ZooKeeper to ClickHouse Keeper.

Preparation

First, choose the ClickHouse version.

1. If you have strict version requirements, or want to use a particular feature, check the changelog in the official documentation:

https://clickhouse.com/docs/en/whats-new/changelog/

2. Download the package files for that version from https://packages.clickhouse.com/rpm/lts/ ; the LTS releases are the relatively stable line suited to enterprise use.

clickhouse-common-static-dbg-22.3.9.19.x86_64.rpm

clickhouse-server-22.3.9.19.x86_64.rpm

clickhouse-client-22.3.9.19.x86_64.rpm

clickhouse-common-static-22.3.9.19.x86_64.rpm
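
For reference, a minimal sketch of pulling these packages with wget, assuming each file sits directly under the LTS repository path (swap in whichever version you chose):

cd /opt/clickhouse
wget https://packages.clickhouse.com/rpm/lts/clickhouse-common-static-22.3.9.19.x86_64.rpm
wget https://packages.clickhouse.com/rpm/lts/clickhouse-common-static-dbg-22.3.9.19.x86_64.rpm
wget https://packages.clickhouse.com/rpm/lts/clickhouse-server-22.3.9.19.x86_64.rpm
wget https://packages.clickhouse.com/rpm/lts/clickhouse-client-22.3.9.19.x86_64.rpm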

3. Back up files (every machine in the cluster must be backed up)

3.1 Back up the data files

mkdir /opt/clickhouse_bak

cp -r /opt/clickhouse/* /opt/clickhouse_bak/

3.2 Back up the configuration files

mkdir /etc/clickhouse-server-bak

mkdir /etc/clickhouse-client-bak

cp -r /etc/clickhouse-client/* /etc/clickhouse-client-bak/

cp -r /etc/clickhouse-server/* /etc/clickhouse-server-bak/

Note: the configuration files here mean config.xml, users.xml, and shard.xml (the shard configuration); back up every file you have customized.
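
As an optional sanity check, confirm the backups match the originals before shutting anything down:

diff -rq /etc/clickhouse-server /etc/clickhouse-server-bak
diff -rq /etc/clickhouse-client /etc/clickhouse-client-bak
du -sh /opt/clickhouse /opt/clickhouse_bak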

3.3 Stop the ClickHouse service

systemctl stop clickhouse-server.service

Make sure the shutdown has completed by checking the service status:

systemctl status clickhouse-server.service

Performing the upgrade

Go to the directory containing the installation packages:

cd /opt/clickhouse

Run the rpm upgrade command:

rpm -Uvh *.rpm
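
To confirm the new packages landed, you can list the installed ClickHouse packages:

rpm -qa | grep clickhouse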

Once the installation is complete, put the backed-up configuration files back in place:

rm -rf /etc/clickhouse-server/*

mv /etc/clickhouse-server-bak/* /etc/clickhouse-server/

Grant ownership to the clickhouse user:

chown -R clickhouse:clickhouse /etc/clickhouse-server/

Go over the configuration files again; different versions come with their own tuning documentation.

Restart ClickHouse:

systemctl daemon-reload

systemctl start clickhouse-server.service
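
After the restart, it is worth verifying that the server came up on the new version, for example:

clickhouse-client --query "SELECT version()"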

Note: for a cluster with replicas, a smooth rolling upgrade without downtime is possible (queries have been verified to keep working; writes need some circuit-breaker strategy).

For example, with A and B as replicas of one shard and C and D as replicas of another, stop A and C for the upgrade while B and D keep serving traffic; once A and C are upgraded, stop B and D to finish the upgrade.
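
Before stopping each pair of nodes, one way to check that the surviving replicas are healthy is to query system.replicas on them; the thresholds below are arbitrary examples, tune them to your cluster:

clickhouse-client --query "SELECT database, table, is_readonly, absolute_delay, queue_size FROM system.replicas WHERE is_readonly OR absolute_delay > 30 OR queue_size > 100"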

Switching from ZooKeeper to ClickHouse Keeper

1. Prepare the configuration files

Following the official documentation, add a clickhouse_keeper_server section to shard.xml (the shard/replica configuration file). The configuration file looks like this:

<yandex>
    <clickhouse_remote_servers>
        <msgdb_3shards_1replicas>
            <shard>
                <internal_replication>true</internal_replication>
                <weight>1</weight>
                <replica>
                    <host>node1</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>node2</host>
                    <port>9001</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <weight>1</weight>
                <replica>
                    <host>node3</host>
                    <port>9003</port>
                </replica>
                <replica>
                    <host>node4</host>
                    <port>9000</port>
                </replica>
            </shard>
        </msgdb_3shards_1replicas>
    </clickhouse_remote_servers>

    <macros>
        <shard>01</shard>
        <replica>01</replica>
    </macros>

    <clickhouse_keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>1</server_id>
        <log_storage_path>/opt/clickhouse/keeper/log</log_storage_path>
        <snapshot_storage_path>/opt/clickhouse/keeper/snapshots</snapshot_storage_path>

        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>

        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>node1</hostname>
                <port>9444</port>
            </server>
            <server>
                <id>2</id>
                <hostname>node2</hostname>
                <port>9444</port>
            </server>
            <server>
                <id>3</id>
                <hostname>node3</hostname>
                <port>9444</port>
            </server>
            <server>
                <id>4</id>
                <hostname>node4</hostname>
                <port>9444</port>
            </server>
        </raft_configuration>
    </clickhouse_keeper_server>

    <zookeeper>
        <node>
            <host>node1</host>
            <port>9181</port>
        </node>
        <node>
            <host>node2</host>
            <port>9181</port>
        </node>
        <node>
            <host>node3</host>
            <port>9181</port>
        </node>
        <node>
            <host>node4</host>
            <port>9181</port>
        </node>
    </zookeeper>

    <networks>
        <ip>::/0</ip>
    </networks>

    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
</yandex>
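
Two reminders when rolling this file out: <server_id> must be unique on each node and match that node's <id> in raft_configuration, and the Keeper log/snapshot directories referenced above need to exist and be writable by the clickhouse user. A minimal sketch, using the paths from the config:

mkdir -p /opt/clickhouse/keeper/log /opt/clickhouse/keeper/snapshots
chown -R clickhouse:clickhouse /opt/clickhouse/keeper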

Add the include references to config.xml:

<include_from>/etc/clickhouse-server/config.d/shard.xml</include_from>
    <remote_servers incl="clickhouse_remote_servers"></remote_servers>
    <keeper_server incl="clickhouse_keeper_server"></keeper_server>

    <zookeeper incl="zookeeper" optional="true" />

    <!-- Substitutions for parameters of replicated tables.
          Optional. If you don't use replicated tables, you could omit that.

         See https://clickhouse.yandex/docs/en/table_engines/replication/#creating-replicated-tables
      -->
    <macros incl="macros" optional="true" />
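
Once the servers are later restarted with this configuration in place, a quick way to confirm that ClickHouse can reach Keeper through the familiar ZooKeeper interface is to list the root path (an illustrative check only):

clickhouse-client --query "SELECT name FROM system.zookeeper WHERE path = '/'"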

ZooKeeper data migration

The migration steps are as follows:

  • Stop all ZooKeeper nodes
  • Find the ZooKeeper leader node
  • Restart the ZooKeeper leader node and stop it again (this forces the leader to write out a snapshot)
  • Run clickhouse-keeper-converter to generate a Keeper snapshot file

Run the command:

/usr/bin/clickhouse-keeper-converter --zookeeper-logs-dir /opt/zookeeper-3.4.6/dataLogDir/version-2 --zookeeper-snapshots-dir /opt/zookeeper-3.4.6/dataDir/version-2 --output-dir /opt/clickhouse/keeper/snapshots/
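
You can then check that a snapshot file was actually produced, and once the nodes are back up, probe Keeper over its TCP port with the four-letter mntr command (assuming nc is installed; mntr is in the default four-letter-word whitelist):

ls -l /opt/clickhouse/keeper/snapshots/
echo mntr | nc node1 9181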

Note: in this example, clickhouse-keeper runs embedded in the ClickHouse server process. If you need to deploy clickhouse-keeper separately, it has to be started with its own command, as sketched below.
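
If you do run Keeper as a standalone process, a hedged sketch of starting it (the config path here is an assumption; adjust it to wherever your keeper configuration actually lives):

clickhouse-keeper --config /etc/clickhouse-keeper/keeper_config.xml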

Reference: https://clickhouse.com/docs/en/operations/clickhouse-keeper
