
TiDB Binlog Deployment

梁嘉澍
2023-12-01

Environment

The Ansible control machine is 10.152.x.133.

[tidb_servers]
10.152.x.10
10.152.x.11
10.152.x.12

[tikv_servers]
10.152.x.10
10.152.x.11
10.152.x.12

[pd_servers]
10.152.x.10
10.152.x.11
10.152.x.12


## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
10.152.x.133

[grafana_servers]
10.152.x.133

# node_exporter and blackbox_exporter servers
[monitored_servers]
10.152.x.10
10.152.x.11
10.152.x.12
10.152.x.133

[alertmanager_servers]
10.152.x.133

First, generate some test data:

yum install sysbench -y

mysql -uroot -p -h10.152.x.10 -P4000 -e "create database sysbench"

sysbench /usr/share/sysbench/oltp_read_write.lua  --mysql-user=root --mysql-password=supersecret --mysql-port=4000 \
--mysql-host=10.152.x.10 \
--mysql-db=sysbench  --tables=10 --table-size=5000  --threads=4 \
--events=5000 --report-interval=5 --db-driver=mysql prepare

Deploying Pump

1. Edit the tidb-ansible/inventory.ini file

Set enable_binlog = True to enable binlog for the TiDB cluster:

## binlog trigger
enable_binlog = True

2. Add the deployment machine IPs to the pump_servers host group

[pump_servers]
pump1 ansible_host=10.152.x.10
pump2 ansible_host=10.152.x.11
pump3 ansible_host=10.152.x.12

To deploy Pump to a dedicated directory, set deploy_dir:

pump1 ansible_host=10.152.x.10 deploy_dir=/data1/pump
pump2 ansible_host=10.152.x.11 deploy_dir=/data2/pump
pump3 ansible_host=10.152.x.12 deploy_dir=/data3/pump

By default, Pump retains data for 7 days. To change this, uncomment the gc variable in tidb-ansible/conf/pump.yml (tidb-ansible/conf/pump-cluster.yml in TiDB 3.0.2 and earlier) and modify its value.

global:
  # an integer value to control the expiry date of the binlog data, which indicates for how long (in days) the binlog data would be stored
  # must be bigger than 0
  # gc: 7
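For example, to shorten the retention to 3 days, uncomment gc and change its value. The sketch below is an illustration of the edited tidb-ansible/conf/pump.yml; 3 is an arbitrary example, any positive integer works:

```yaml
global:
  # keep binlog data for 3 days instead of the default 7
  gc: 3
```

The change takes effect once Pump is deployed and restarted with the steps that follow.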

3. Deploy pump_servers and node_exporters

ansible-playbook deploy.yml --tags=pump -l pump1,pump2,pump3

If no aliases were assigned to the Pump hosts, use the IPs instead:

ansible-playbook deploy.yml --tags=pump -l 10.152.x.10,10.152.x.11,10.152.x.12

In the commands above, do not put a space after each comma, or the command will fail.

4. Start pump_servers

ansible-playbook start.yml --tags=pump

Check that the process is running:

# ps -ef| grep pump
tidb     26199     1  0 21:05 ?        00:00:00 bin/pump --addr=0.0.0.0:8250 --advertise-addr=pump1:8250 --pd-urls=http://10.152.x.10:2379,http://10.152.x.11:2379,http://10.152.x.12:2379 --data-dir=/DATA1/home/tidb/deploy/data.pump --log-file=/DATA1/home/tidb/deploy/log/pump.log --config=conf/pump.toml

5. Update and rolling-restart tidb_servers

This makes enable_binlog = True take effect:

ansible-playbook rolling_update.yml --tags=tidb

6. Update the monitoring configuration

ansible-playbook rolling_update_monitor.yml --tags=prometheus

7. Check the Pump service status

Use binlogctl to check the Pump service status. Replace the pd-urls parameter with your cluster's PD addresses. A State of online means the Pump started successfully:

$resources/bin/binlogctl -pd-urls=http://10.152.x.10:2379 -cmd pumps
[2019/12/01 21:11:47.655 +08:00] [INFO] [nodes.go:49] ["query node"] [type=pump] [node="{NodeID: knode10-152-x-12:8250, Addr: pump3:8250, State: online, MaxCommitTS: 412930776489787394, UpdateTime: 2019-12-01 21:11:45 +0800 CST}"]
[2019/12/01 21:11:47.655 +08:00] [INFO] [nodes.go:49] ["query node"] [type=pump] [node="{NodeID: knode10-152-x-10:8250, Addr: pump1:8250, State: online, MaxCommitTS: 412930776463572993, UpdateTime: 2019-12-01 21:11:45 +0800 CST}"]
[2019/12/01 21:11:47.655 +08:00] [INFO] [nodes.go:49] ["query node"] [type=pump] [node="{NodeID: knode10-152-x-11:8250, Addr: pump2:8250, State: online, MaxCommitTS: 412930776489787393, UpdateTime: 2019-12-01 21:11:45 +0800 CST}"]

Alternatively, log in to TiDB and run show pump status:

root@10.152.x.10 21:10:32 [(none)]> show pump status;
+-----------------------+------------+--------+--------------------+---------------------+
| NodeID                | Address    | State  | Max_Commit_Ts      | Update_Time         |
+-----------------------+------------+--------+--------------------+---------------------+
| knode10-152-x-10:8250 | pump1:8250 | online | 412930792991752193 | 2019-12-01 21:12:48 |
| knode10-152-x-11:8250 | pump2:8250 | online | 412930793017966593 | 2019-12-01 21:12:48 |
| knode10-152-x-12:8250 | pump3:8250 | online | 412930793017966594 | 2019-12-01 21:12:48 |
+-----------------------+------------+--------+--------------------+---------------------+
3 rows in set (0.01 sec)

Deploying Drainer

Deploy Drainer to replicate from TiDB into MySQL as a downstream secondary.

1. Download the tidb-enterprise-tools package

wget http://download.pingcap.org/tidb-enterprise-tools-latest-linux-amd64.tar.gz
tar -zxvf tidb-enterprise-tools-latest-linux-amd64.tar.gz

2. Back up the TiDB data with Mydumper

$sudo ./tidb-enterprise-tools-latest-linux-amd64/bin/mydumper --ask-password -h 10.152.x.10 -P 4000 -u root --threads=4 --chunk-filesize=64 --skip-tz-utc --regex '^(?!(mysql\.|information_schema\.|performance_schema\.))'  -o /mfw_rundata/dump/ --verbose=3
Enter MySQL Password: 
** Message: 21:30:17.767: Server version reported as: 5.7.25-TiDB-v3.0.5
** Message: 21:30:17.767: Connected to a TiDB server
** Message: 21:30:17.771: Skipping locks because of TiDB
** Message: 21:30:17.772: Set to tidb_snapshot '412931068452667405'
** Message: 21:30:17.782: Started dump at: 2019-12-01 21:30:17

** Message: 21:30:17.782: Written master status
** Message: 21:30:17.784: Thread 1 connected using MySQL connection ID 20
** Message: 21:30:17.794: Thread 1 set to tidb_snapshot '412931068452667405'
** Message: 21:30:17.796: Thread 2 connected using MySQL connection ID 21
** Message: 21:30:17.807: Thread 2 set to tidb_snapshot '412931068452667405'
** Message: 21:30:17.809: Thread 3 connected using MySQL connection ID 22
** Message: 21:30:17.819: Thread 3 set to tidb_snapshot '412931068452667405'
** Message: 21:30:17.820: Thread 4 connected using MySQL connection ID 23
** Message: 21:30:17.832: Thread 4 set to tidb_snapshot '412931068452667405'
** Message: 21:30:17.843: Thread 2 dumping data for `sysbench`.`sbtest1`
** Message: 21:30:17.844: Non-InnoDB dump complete, unlocking tables
** Message: 21:30:17.843: Thread 3 dumping data for `sysbench`.`sbtest2`
** Message: 21:30:17.843: Thread 1 dumping data for `sysbench`.`sbtest10`
** Message: 21:30:17.843: Thread 4 dumping data for `sysbench`.`sbtest3`
** Message: 21:30:17.882: Thread 4 dumping data for `sysbench`.`sbtest4`
** Message: 21:30:17.883: Thread 2 dumping data for `sysbench`.`sbtest5`
** Message: 21:30:17.887: Thread 3 dumping data for `sysbench`.`sbtest6`
** Message: 21:30:17.890: Thread 1 dumping data for `sysbench`.`sbtest7`
** Message: 21:30:17.911: Thread 4 dumping data for `sysbench`.`sbtest8`
** Message: 21:30:17.925: Thread 1 dumping data for `sysbench`.`sbtest9`
** Message: 21:30:17.938: Thread 4 dumping schema for `sysbench`.`sbtest1`
** Message: 21:30:17.939: Thread 4 dumping schema for `sysbench`.`sbtest10`
** Message: 21:30:17.941: Thread 4 dumping schema for `sysbench`.`sbtest2`
** Message: 21:30:17.942: Thread 4 dumping schema for `sysbench`.`sbtest3`
** Message: 21:30:17.943: Thread 4 dumping schema for `sysbench`.`sbtest4`
** Message: 21:30:17.944: Thread 4 dumping schema for `sysbench`.`sbtest5`
** Message: 21:30:17.945: Thread 4 dumping schema for `sysbench`.`sbtest6`
** Message: 21:30:17.946: Thread 4 dumping schema for `sysbench`.`sbtest7`
** Message: 21:30:17.947: Thread 4 dumping schema for `sysbench`.`sbtest8`
** Message: 21:30:17.948: Thread 4 dumping schema for `sysbench`.`sbtest9`
** Message: 21:30:17.949: Thread 4 shutting down
** Message: 21:30:18.079: Thread 2 shutting down
** Message: 21:30:18.084: Thread 3 shutting down
** Message: 21:30:18.087: Thread 1 shutting down
** Message: 21:30:18.087: Finished dump at: 2019-12-01 21:30:18

Get the TSO value:

$sudo cat /mfw_rundata/dump/metadata
Started dump at: 2019-12-01 21:30:17
SHOW MASTER STATUS:
        Log: tidb-binlog
        Pos: 412931068452667405
        GTID:

Finished dump at: 2019-12-01 21:30:18


The TSO is 412931068452667405.
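As a sanity check, a TiDB TSO can be decoded by hand: the low 18 bits are a logical counter, and the remaining high bits (TSO >> 18) are a physical timestamp in milliseconds since the Unix epoch. A minimal sketch (assuming GNU date, as on Linux):

```shell
# TiDB TSO layout: [ physical time in ms | 18 logical bits ]
tso=412931068452667405
ms=$(( tso >> 18 ))                              # milliseconds since the Unix epoch
date -u -d "@$(( ms / 1000 ))" '+%Y-%m-%d %H:%M:%S UTC'
# 2019-12-01 13:30:17 UTC
```

13:30:17 UTC is 21:30:17 +08:00, matching the "Started dump at" time in the Mydumper output above, so the metadata Pos really is the dump snapshot.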

Create a dedicated replication account for Drainer on the downstream MySQL:

CREATE USER IF NOT EXISTS 'drainer'@'%';
ALTER USER 'drainer'@'%' IDENTIFIED BY 'drainer_supersecret';
GRANT INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, EXECUTE, INDEX, SELECT ON *.* TO 'drainer'@'%';

Import the data into MySQL:

$sudo ./tidb-enterprise-tools-latest-linux-amd64/bin/loader -d /mfw_rundata/dump/ -h 10.132.2.143 -u drainer -p drainer_supersecret -P 3308 -t 2 -m '' -status-addr ':8723'
2019/12/01 22:12:55 printer.go:52: [info] Welcome to loader
2019/12/01 22:12:55 printer.go:53: [info] Release Version: v1.0.0-76-gad009d9
2019/12/01 22:12:55 printer.go:54: [info] Git Commit Hash: ad009d917b2cdc2a9cc26bc4e7046884c1ff43e7
2019/12/01 22:12:55 printer.go:55: [info] Git Branch: master
2019/12/01 22:12:55 printer.go:56: [info] UTC Build Time: 2019-10-21 06:22:03
2019/12/01 22:12:55 printer.go:57: [info] Go Version: go version go1.12 linux/amd64
2019/12/01 22:12:55 main.go:51: [info] config: {"log-level":"info","log-file":"","status-addr":":8723","pool-size":2,"dir":"/mfw_rundata/dump/","db":{"host":"10.132.2.143","user":"drainer","port":3308,"sql-mode":"","max-allowed-packet":67108864},"checkpoint-schema":"tidb_loader","config-file":"","route-rules":null,"do-table":null,"do-db":null,"ignore-table":null,"ignore-db":null,"rm-checkpoint":false}
2019/12/01 22:12:55 loader.go:532: [info] [loader] prepare takes 0.000565 seconds
2019/12/01 22:12:55 checkpoint.go:207: [info] calc checkpoint finished. finished tables (map[])
2019/12/01 22:12:55 loader.go:715: [info] [loader][run db schema]/mfw_rundata/dump//sysbench-schema-create.sql[start]
2019/12/01 22:12:55 loader.go:720: [info] [loader][run db schema]/mfw_rundata/dump//sysbench-schema-create.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest10-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest10-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest3-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest3-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest9-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest9-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest2-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest2-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest4-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest4-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest5-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest5-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest7-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest7-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest6-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest6-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest8-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest8-schema.sql[finished]
2019/12/01 22:12:55 loader.go:736: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest1-schema.sql[start]
2019/12/01 22:12:55 loader.go:741: [info] [loader][run table schema]/mfw_rundata/dump//sysbench.sbtest1-schema.sql[finished]
2019/12/01 22:12:55 loader.go:715: [info] [loader][run db schema]/mfw_rundata/dump//test-schema-create.sql[start]
2019/12/01 22:12:55 loader.go:720: [info] [loader][run db schema]/mfw_rundata/dump//test-schema-create.sql[finished]
2019/12/01 22:12:55 loader.go:773: [info] [loader] create tables takes 0.334379 seconds
2019/12/01 22:12:55 loader.go:788: [info] [loader] all data files have been dispatched, waiting for them finished 
2019/12/01 22:12:55 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest3.sql[start]
2019/12/01 22:12:55 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest8.sql[start]
2019/12/01 22:12:55 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest8.sql scanned finished.
2019/12/01 22:12:55 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest3.sql scanned finished.
2019/12/01 22:12:56 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest8.sql[finished]
2019/12/01 22:12:56 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest9.sql[start]
2019/12/01 22:12:56 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest3.sql[finished]
2019/12/01 22:12:56 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest4.sql[start]
2019/12/01 22:12:56 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest9.sql scanned finished.
2019/12/01 22:12:56 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest4.sql scanned finished.
2019/12/01 22:12:56 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest4.sql[finished]
2019/12/01 22:12:56 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest7.sql[start]
2019/12/01 22:12:56 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest9.sql[finished]
2019/12/01 22:12:56 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest6.sql[start]
2019/12/01 22:12:56 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest7.sql scanned finished.
2019/12/01 22:12:56 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest6.sql scanned finished.
2019/12/01 22:12:56 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest7.sql[finished]
2019/12/01 22:12:56 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest1.sql[start]
2019/12/01 22:12:57 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest6.sql[finished]
2019/12/01 22:12:57 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest10.sql[start]
2019/12/01 22:12:57 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest1.sql scanned finished.
2019/12/01 22:12:57 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest10.sql scanned finished.
2019/12/01 22:12:57 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest10.sql[finished]
2019/12/01 22:12:57 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest2.sql[start]
2019/12/01 22:12:57 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest1.sql[finished]
2019/12/01 22:12:57 loader.go:158: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest5.sql[start]
2019/12/01 22:12:57 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest2.sql scanned finished.
2019/12/01 22:12:57 loader.go:216: [info] data file /mfw_rundata/dump/sysbench.sbtest5.sql scanned finished.
2019/12/01 22:12:57 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest2.sql[finished]
2019/12/01 22:12:57 loader.go:165: [info] [loader][restore table data sql]/mfw_rundata/dump//sysbench.sbtest5.sql[finished]
2019/12/01 22:12:57 loader.go:791: [info] [loader] all data files has been finished, takes 2.037124 seconds
2019/12/01 22:12:57 main.go:88: [info] loader stopped and exits 

3. Edit the tidb-ansible/inventory.ini file

Add the deployment machine IP to the drainer_servers host group, and set initial_commit_ts to the TSO obtained above; it is only used the first time Drainer starts.

[drainer_servers]
drainer_mysql ansible_host=10.152.x.12 initial_commit_ts="412931068452667405"

Modify the configuration file:

[tidb@knode10-152-x-133 21:33:21 ~/tidb-ansible]
$cp conf/drainer.toml conf/drainer_mysql_drainer.toml

vim conf/drainer_mysql_drainer.toml

db-type = "mysql"

[syncer.to]
host = "10.132.2.143"
user = "drainer"
password = "drainer_supersecret"
port = 3308
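If you only want to replicate specific schemas rather than the whole cluster, the stock drainer.toml also has filtering options under [syncer]. A hedged sketch (option names are from the default template shipped with tidb-ansible; verify them against your Drainer version):

```toml
[syncer]
# only replicate the sysbench schema to the downstream MySQL
replicate-do-db = ["sysbench"]
```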

4. Deploy Drainer

ansible-playbook deploy_drainer.yml

5. Start Drainer

ansible-playbook start_drainer.yml

6. Check the Drainer status

root@10.152.x.10 22:06:49 [(none)]> show drainer status;
+-----------------------+------------------+--------+--------------------+---------------------+
| NodeID                | Address          | State  | Max_Commit_Ts      | Update_Time         |
+-----------------------+------------------+--------+--------------------+---------------------+
| knode10-152-x-12:8249 | 10.152.x.12:8249 | online | 412931643727675393 | 2019-12-01 22:06:52 |
+-----------------------+------------------+--------+--------------------+---------------------+
1 row in set (0.00 sec)


$resources/bin/binlogctl -pd-urls=http://10.152.x.10:2379 -cmd drainers
[2019/12/01 22:07:27.531 +08:00] [INFO] [nodes.go:49] ["query node"] [type=drainer] [node="{NodeID: knode10-152-x-12:8249, Addr: 10.152.x.12:8249, State: online, MaxCommitTS: 412931651605102594, UpdateTime: 2019-12-01 22:07:26 +0800 CST}"]
