
A first attempt at MySQL sharding with dble

习洲
2023-12-01

Install MySQL

  • The MySQL installation itself is omitted here; install it on your own.
  • MySQL version: 5.7.26

The MySQL topology is as follows:

Master IP     | Master port | Slave IPs                    | Slave port | dataHost
10.216.91.118 | 3311        | 10.216.91.119, 10.216.91.120 | 3311       | dataHost1
10.216.91.118 | 3312        | 10.216.91.119, 10.216.91.120 | 3312       | dataHost2

Configure MySQL

  • Enable GTID
  • Configure enhanced (lossless) semi-synchronous replication
  • Create databases db1, db3, db5 on the 3311 instance
  • Create databases db2, db4, db6 on the 3312 instance
  • Create the test user (a setup sketch covering these steps follows this list)
    • Privileges dble needs to connect to the backend databases:
      SELECT,INSERT,UPDATE,DELETE,FILE,CREATE,DROP,ALTER,LOCK TABLES,ALTER ROUTINE,CREATE ROUTINE,EXECUTE,INDEX,SUPER,SHOW DATABASES,REPLICATION CLIENT,REFERENCES
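
A minimal sketch of these preparation steps, assuming the instances are reachable as listed in the topology table above; the root credentials, my.cnf locations and the test password are placeholders to adapt to your environment:

# my.cnf fragments for GTID and enhanced (lossless) semi-sync; add to every instance and restart
#   gtid_mode = ON
#   enforce_gtid_consistency = ON
#   plugin-load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
#   rpl_semi_sync_master_enabled = ON
#   rpl_semi_sync_slave_enabled = ON

# create the databases used by the dataNodes, as per the list above
mysql -h10.216.91.118 -P3311 -uroot -p -e "CREATE DATABASE db1; CREATE DATABASE db3; CREATE DATABASE db5;"
mysql -h10.216.91.118 -P3312 -uroot -p -e "CREATE DATABASE db2; CREATE DATABASE db4; CREATE DATABASE db6;"

# create the test user dble connects with (password is a placeholder) and grant the listed privileges
mysql -h10.216.91.118 -P3311 -uroot -p -e "
CREATE USER 'test'@'%' IDENTIFIED BY 'your_password';
GRANT SELECT,INSERT,UPDATE,DELETE,FILE,CREATE,DROP,ALTER,LOCK TABLES,ALTER ROUTINE,CREATE ROUTINE,EXECUTE,INDEX,SUPER,SHOW DATABASES,REPLICATION CLIENT,REFERENCES ON *.* TO 'test'@'%';"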

Install ZooKeeper

IP            | clientport | basedir                 | datadir
10.216.91.117 | 2881       | /data/service/zookeeper | /data/service_data/zookeeper
10.216.91.118 | 2881       | /data/service/zookeeper | /data/service_data/zookeeper
10.216.91.119 | 2881       | /data/service/zookeeper | /data/service_data/zookeeper
  1. Download
sudo wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz

Extract the archive to /data/service/zookeeper.
  2. Create the data and log directories
mkdir -p /data/service_data/zookeeper/{data,logs}
  3. Edit the configuration file
cd /data/service/zookeeper
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/data/service_data/zookeeper/data
# the port at which the clients will connect
clientPort=2881
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=10.216.91.117:2882:3882
server.2=10.216.91.118:2882:3882
server.3=10.216.91.119:2882:3882

Note that three ports are involved:

  • clientPort
  • the port followers use to exchange information with the leader
  • the port used for leader election
  4. Configure the cluster id
echo 1 > /data/service_data/zookeeper/data/myid

Note: myid lives under the dataDir directory, and each node must have a different myid.
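
A quick sketch of how the myid values line up with the server.N entries in zoo.cfg above (run the matching line on each node; the mapping is inferred from the server.1/2/3 lines):

# on 10.216.91.117 (server.1)
echo 1 > /data/service_data/zookeeper/data/myid
# on 10.216.91.118 (server.2)
echo 2 > /data/service_data/zookeeper/data/myid
# on 10.216.91.119 (server.3)
echo 3 > /data/service_data/zookeeper/data/myid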

  5. Start the service
cd /data/service/zookeeper/bin
sh zkServer.sh start
  6. Check the running status
sh zkServer.sh status
  7. Connect with the client
sh zkCli.sh -server 10.216.91.117:2881

Install HAProxy

The installation steps are omitted.

The configuration file is as follows:

global
    log 127.0.0.1 local0
    maxconn 4096
    chroot /usr/local/haproxy
    user haproxy
    group haproxy
    daemon
    pidfile /usr/local/haproxy/logs/haproxy.pid

defaults
    log global
    mode tcp
    retries 3
    option redispatch
    log 127.0.0.1 local0 
    maxconn 2000
    timeout connect 3s
    timeout client 3600s
    timeout server 3600s

listen stats
  bind 0.0.0.0:8000
  mode http
  stats enable
  stats uri /admin
  stats refresh 5s
  stats show-node
  stats show-legends
  stats hide-version
  stats auth admin:admin
  stats realm Haproxy\ Manager

frontend dble-write
    bind *:8066
    mode tcp
    default_backend dble-write-back

backend dble-write-back
    mode tcp
    balance leastconn
    option tcplog
    server dble1 10.216.91.117:8066 check port 8066 inter 5000 rise 3 fall 3
    server dble2 10.216.91.118:8066 check port 8066 inter 5000 rise 3 fall 3
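
Before (re)starting it is worth letting HAProxy validate the file; a small sketch, assuming the configuration above is saved as /etc/haproxy/haproxy.cfg (adjust the path to your installation):

# syntax-check the configuration
haproxy -c -f /etc/haproxy/haproxy.cfg
# start haproxy; for a graceful reload add -sf with the old pid
haproxy -f /etc/haproxy/haproxy.cfg -p /usr/local/haproxy/logs/haproxy.pid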

Install DBLE

IP            | Service port | Management port | Install path
10.216.91.117 | 8066         | 9066            | /data/apps/dble
10.216.91.118 | 8066         | 9066            | /data/apps/dble

Download and install

  1. Download URL
  2. Extract and install
tar -xvf actiontech-dble-2.20.04.0.tar.gz -C /data/apps/
cd /data/apps/dble/conf 
cp rule_template.xml rule.xml 
cp schema_template.xml schema.xml 
cp server_template.xml server.xml

Configure DBLE

schema.xml

<!DOCTYPE dble:schema SYSTEM "schema.dtd">
<dble:schema xmlns:dble="http://dble.cloud/" version="3.0">
    <schema name="testdb">
        <table name="tb_enum_sharding" dataNode="dn1,dn2" rule="rule_enum" cacheKey="ID"/>
        <table name="tb_range_sharding" dataNode="dn1,dn2,dn3" rule="rule_range"/>
        <table name="t1" dataNode="dn1,dn2" rule="rule_id_hash" incrementColumn="id"/>
        <table name="sptb_jz_user" dataNode="dn1,dn2" rule="rule_id_hash"/>
        <table name="jz_user" dataNode="dn1,dn2" rule="rule_id_hash"/>
        <table name="sptb_jz_apply" dataNode="dn1,dn2" rule="rule_userid_hash"/>
        <table name="jz_apply" dataNode="dn1,dn2" rule="rule_userid_hash"/>
        <table name="sptb_jz_user2" dataNode="dn1,dn2" rule="rule_common_hash">
            <childTable name="sptb_jz_apply2" joinKey="user_id" parentKey="id"/>
        </table>
        <table name="tb_global1" dataNode="dn1,dn2" type="global"/>
        <table name="tb_single" dataNode="dn6"/>
    </schema>
    <schema name="testdb1">
        <table name="jz_user_tmp" dataNode="dn1,dn2" rule="rule_id_hash" cacheKey="id"/>
    </schema>
    <schema name="testdb2" dataNode="dn5"/>
    <schema name="tmp" dataNode="dn3"/>
    <schema name="dtle" dataNode="dn_dtle"/>
    <schema name="sbtest">
        <table name="sbtest1" dataNode="dn$1-6" rule="hash_sysbench" cacheKey="id"/>
    </schema>
    <dataNode name="dn1" dataHost="dataHost1" database="db_1"/>
    <dataNode name="dn2" dataHost="dataHost2" database="db_2"/>
    <dataNode name="dn3" dataHost="dataHost1" database="db_3"/>
    <dataNode name="dn4" dataHost="dataHost2" database="db_4"/>
    <dataNode name="dn5" dataHost="dataHost1" database="db_5"/>
    <dataNode name="dn6" dataHost="dataHost2" database="db_6"/>
    <dataNode name="dn_dtle" dataHost="dataHost1" database="db_dtle"/>
    <dataHost balance="0" maxCon="1000" minCon="10" name="dataHost1" slaveThreshold="100">
        <heartbeat>show slave status</heartbeat>
        <writeHost host="10_216_91_119" url="10.216.91.119:3311" password="doumi1.q" user="test" disabled="false" id="hostS1" weight="0">
            <readHost host="10_216_91_120" url="10.216.91.120:3311" password="doumi1.q" user="test" disabled="false" id="hostS1" weight="0"/>
            <readHost host="10_216_91_118" url="10.216.91.118:3311" password="doumi1.q" user="test" disabled="false" id="hostM1" weight="0"/>
        </writeHost>
    </dataHost>
    <dataHost balance="0" maxCon="1000" minCon="10" name="dataHost2" slaveThreshold="100">
        <heartbeat>show slave status</heartbeat>
        <writeHost host="10_216_91_118" url="10.216.91.118:3312" password="doumi1.q" user="test" disabled="false" id="hostM2" weight="0">
            <readHost host="10_216_91_120" url="10.216.91.120:3312" password="doumi1.q" user="test" disabled="false" id="hostS2" weight="0"/>
            <readHost host="10_216_91_119" url="10.216.91.119:3312" password="doumi1.q" user="test" disabled="false" id="hostS2" weight="0"/>
        </writeHost>
    </dataHost>
</dble:schema>
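
Once dble is started (see "Start DBLE" below), the resulting topology can be checked from the management port; a sketch, assuming the man1 manager account defined in server.xml below and the standard dble manager commands:

mysql -h10.216.91.117 -P9066 -uman1 -p654321 -e "show @@datasource;"
mysql -h10.216.91.117 -P9066 -uman1 -p654321 -e "show @@datanode;"
mysql -h10.216.91.117 -P9066 -uman1 -p654321 -e "show @@heartbeat;"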

rule.xml

<!DOCTYPE dble:rule SYSTEM "rule.dtd">
<dble:rule xmlns:dble="http://dble.cloud/" version="3.0">
    <tableRule name="rule_enum">
        <rule>
            <columns>code</columns>
            <algorithm>func_enum</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_range">
        <rule>
            <columns>id</columns>
            <algorithm>func_range</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_common_hash">
        <rule>
            <columns>id</columns>
            <algorithm>func_common_hash</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_id_hash">
        <rule>
            <columns>id</columns>
            <algorithm>hash2_function</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_userid_hash">
        <rule>
            <columns>user_id</columns>
            <algorithm>hash2_function</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_common_hash2">
        <rule>
            <columns>id2</columns>
            <algorithm>func_common_hash</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_uneven_hash">
        <rule>
            <columns>id</columns>
            <algorithm>func_uneven_hash</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_mod">
        <rule>
            <columns>id</columns>
            <algorithm>func_mod</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_jumpHash">
        <rule>
            <columns>code</columns>
            <algorithm>func_jumpHash</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_hashString">
        <rule>
            <columns>code</columns>
            <algorithm>func_hashString</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_date">
        <rule>
            <columns>create_date</columns>
            <algorithm>func_date</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_pattern">
        <rule>
            <columns>id</columns>
            <algorithm>func_pattern</algorithm>
        </rule>
    </tableRule>
    <tableRule name="hash_sysbench">
        <rule>
            <columns>id</columns>
            <algorithm>func_hash_sysbench</algorithm>
        </rule>
    </tableRule>
    <function name="func_enum" class="Enum">
        <property name="mapFile">partition-enum.txt</property>
        <property name="defaultNode">0</property>
        <property name="type">0</property>
    </function>
    <function name="func_range" class="NumberRange">
        <property name="mapFile">partition-number-range.txt</property>
        <property name="defaultNode">0</property>
    </function>
    <function name="func_common_hash" class="Hash">
        <property name="partitionCount">2</property>
        <property name="partitionLength">512</property>
    </function>
    <function name="hash2_function" class="Hash">
        <property name="partitionCount">2</property>
        <property name="partitionLength">512</property>
    </function>
    <function name="func_uneven_hash" class="Hash">
        <property name="partitionCount">2,1</property>
        <property name="partitionLength">256,512</property>
    </function>
    <function name="func_mod" class="Hash">
        <property name="partitionCount">4</property>
        <property name="partitionLength">1</property>
    </function>
    <function name="func_jumpHash" class="jumpStringHash">
        <property name="partitionCount">2</property>
        <property name="hashSlice">0:2</property>
    </function>
    <function name="func_hashString" class="StringHash">
        <property name="partitionCount">4</property>
        <property name="partitionLength">256</property>
        <property name="hashSlice">0:2</property>
    </function>
    <function name="func_date" class="Date">
        <property name="dateFormat">yyyy-MM-dd</property>
        <property name="sBeginDate">2015-01-01</property>
        <property name="sEndDate">2015-01-31 </property>
        <property name="sPartionDay">10</property>
        <property name="defaultNode">0</property>
    </function>
    <function name="func_pattern" class="PatternRange">
        <property name="mapFile">partition-pattern.txt</property>
        <property name="patternValue">1024</property>
        <property name="defaultNode">0</property>
    </function>
    <function name="func_hash_sysbench" class="Hash">
        <property name="partitionCount">6</property>
        <property name="partitionLength">1</property>
    </function>
</dble:rule>
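
To make the Hash rules above a bit more concrete, here is a back-of-the-envelope sketch of the slot arithmetic such rules generally imply (partitionCount × partitionLength consecutive slots, here 2 × 512 = 1024 for func_common_hash); treat it as an illustration of the idea rather than dble's exact implementation:

# which partition does id=10007 fall into under a 2 x 512 Hash rule?
id=10007
slots=$((2 * 512))            # total slot space
slot=$((id % slots))          # slot the key maps to
partition=$((slot / 512))     # index 0 -> first dataNode, 1 -> second
echo "id=$id -> slot=$slot -> partition index $partition"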

server.xml

<!DOCTYPE dble:server SYSTEM "server.dtd">
<dble:server xmlns:dble="http://dble.cloud/" version="3.0">
    <system>
        <property name="processors">4</property>
        <property name="backendProcessors">4</property>
        <property name="processorExecutor">8</property>
        <property name="backendProcessorExecutor">6</property>
        <property name="sequenceHandlerType">2</property>
        <property name="serverBacklog">2048</property>
        <property name="useThreadUsageStat">1</property>
        <property name="showBinlogStatusTimeout">10000</property>
        <property name="charset">utf8mb4</property>
        <property name="maxPacketSize">4194304</property>
        <property name="txIsolation">2</property>
        <property name="autocommit">1</property>
        <property name="checkTableConsistency">0</property>
        <property name="checkTableConsistencyPeriod">60000</property>
        <property name="dataNodeIdleCheckPeriod">300000</property>
        <property name="dataNodeHeartbeatPeriod">10000</property>
        <property name="processorCheckPeriod">1000</property>
        <property name="sqlExecuteTimeout">300</property>
        <property name="idleTimeout">1800000</property>
        <property name="recordTxn">0</property>
        <property name="xaSessionCheckPeriod">1000</property>
        <property name="xaLogCleanPeriod">1000</property>
        <property name="useJoinStrategy">true</property>
        <property name="nestLoopConnSize">4</property>
        <property name="nestLoopRowsSize">2000</property>
        <property name="otherMemSize">4</property>
        <property name="orderMemSize">4</property>
        <property name="joinMemSize">4</property>
        <property name="bufferPoolChunkSize">4096</property>
        <property name="bufferPoolPageNumber">512</property>
        <property name="bufferPoolPageSize">2097152</property>
        <property name="useSqlStat">1</property>
        <property name="enableSlowLog">1</property>
        <property name="bufferUsagePercent">80</property>
        <property name="clearBigSQLResultSetMapMs">600000</property>
        <property name="sqlRecordCount">10</property>
        <property name="maxResultSet">524288</property>
        <property name="flushSlowLogPeriod">1</property>
        <property name="flushSlowLogSize">1000</property>
        <property name="sqlSlowTime">100</property>
        <property name="enableFlowControl">true</property>
        <property name="flowControlStartThreshold">4096</property>
        <property name="flowControlStopThreshold">256</property>
        <property name="useOuterHa">true</property>
    </system>
    <firewall>
        <blacklist check="true">
            <property name="renameTableAllow">true</property>
        </blacklist>
    </firewall>
    <user name="man1">
        <property name="password">654321</property>
        <property name="manager">true</property>
    </user>
    <user name="root">
        <property name="password">123456</property>
        <property name="schemas">testdb,testdb2,tmp,dtle,testdb1</property>
    </user>
    <user name="test">
        <property name="password">111111</property>
        <property name="schemas">sbtest</property>
    </user>
    <user name="user">
        <property name="password">AqEkFEuIFAX6g2TJQnp4cJ2r7Yc0Z4/KBsZqKhT8qSz18Aj91e8lxO49BKQElC6OFfW4c38pCYa8QGFTub7pnw==</property>
        <property name="usingDecrypt">1</property>
        <property name="schemas">testdb</property>
        <property name="readOnly">true</property>
        <property name="maxCon">100</property>
    </user>
</dble:server>
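
With these users in place, the split between the service port and the management port looks like this; a quick sketch of connecting through 8066 with the accounts defined above (passwords as configured):

# root can use the testdb/testdb1/testdb2/tmp/dtle schemas, test is limited to sbtest
mysql -h10.216.91.117 -P8066 -uroot -p123456 -e "show databases;"
mysql -h10.216.91.117 -P8066 -utest -p111111 -e "show databases;"
# man1 is a manager account and connects to port 9066 instead (see the sketch after schema.xml)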

Global unique sequence

dble provides four ways to implement globally unique IDs:

  • MySQL offset-step
  • Distributed MySQL offset-step
  • Timestamp-based
  • Distributed timestamp-based

We use the timestamp-based approach; the configuration file is as follows:

sequence_time_conf.properties
#sequence depend on TIME
WORKID=01
DATAACENTERID=01
START_TIME=2020-01-01 00:00:00

Notes:

  • WORKID must be an integer in [0,31]
  • DATAACENTERID must be an integer in [0,31]
  • The WORKID/DATAACENTERID pair must make this dble instance unique within the dble cluster
  • The column backed by the global sequence must be BIGINT (see the sketch after this list)
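
To illustrate how the sequence is consumed, a sketch against the t1 table that schema.xml declares with incrementColumn="id" (the c1 column is made up for the example; the real table definition is not shown in this article):

# create t1 through dble, insert without supplying id, and let dble fill id
# from the timestamp sequence -- which is why the column has to be BIGINT
mysql -h10.216.91.117 -P8066 -uroot -p123456 testdb -e "
CREATE TABLE t1 (id BIGINT NOT NULL PRIMARY KEY, c1 VARCHAR(32));
INSERT INTO t1 (c1) VALUES ('hello');
SELECT id, c1 FROM t1;"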

Log cleanup configuration

Modify the following part of log4j2.xml:

        <DefaultRolloverStrategy max="100">
            <Delete basePath="logs" maxDepth="2">
                <IfFileName glob="*/dble-*.log.gz">
                    <IfLastModified age="3d">
                        <IfAny>
                            <IfAccumulatedFileSize exceeds="1GB"/>
                            <IfAccumulatedFileCount exceeds="10"/>
                        </IfAny>
                    </IfLastModified>
                </IfFileName>
            </Delete>
        </DefaultRolloverStrategy>

DBLE cluster configuration

myid.properties is configured as follows:

#set false if not use cluster ucore/zk
cluster=zk
#client info
ipAddress=10.216.91.117:2881,10.216.91.118:2881,10.216.91.119:2881
port=5700
#cluster namespace, please use the same one in one cluster
clusterId=dble-cluster-1
#it must be different for every node in cluster
myid=bw-sys-k8s-v02
serverID=server_02
clusterHa=true

Setting the parameter clusterHa = true:

  • When this parameter is enabled, a clustered dble synchronizes the state of its own dataSources across the cluster
  • The setting has no effect when useOuterHa in server.xml is false
  • When useOuterHa is true but clusterHa is not, dble can still execute all of the high-availability switching commands, but it behaves like a standalone dble, and the state of the multiple dble nodes in the cluster has to be synchronized manually
  • Note: changing this parameter requires a restart of the dble service

MySQL-HA

Enable the DBLE parameters

Parameters that enable external HA

  • In the system section of server.xml
<property name="useOuterHa">true</property>
  • myid.properties
Set the parameter clusterHa = true

Configure MHA

The MHA installation steps are omitted.

The configuration file is as follows:

[server default]
manager_log=/etc/masterha/dble1/manager.log
manager_workdir=/etc/masterha/dble1
master_ip_failover_script="/etc/masterha/dble1/master_ip_failover"
master_ip_online_change_script="/etc/masterha/dble1/master_ip_online_change"
password="wxasd.q"
ping_interval=1
repl_password="wxasd.q"
repl_user=dmrepl
report_script="/etc/masterha/dble1/send_report"
secondary_check_script=/bin/masterha_secondary_check -s 10.216.91.120 --user=root --master_host=bw-sys-k8s-v03 --master_ip=10.216.91.118 --master_port=3311
ssh_user=root
user=root

[server1]
candidate_master=1
hostname=10.216.91.118
master_binlog_dir=/data/mysql/3311_mysql/binlogdir
port=3311

[server2]
candidate_master=1
hostname=10.216.91.119
master_binlog_dir=/data/mysql/3311_mysql/binlogdir
port=3311
ignore_fail=1

[server3]
candidate_master=1
hostname=10.216.91.120
master_binlog_dir=/data/mysql/3311_mysql/binlogdir
port=3311
ignore_fail=1
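
Before starting the manager it is worth running MHA's own checks; a sketch, assuming the configuration above is saved as /etc/masterha/dble1/dble1.cnf (the same path the startup command below uses):

# verify password-less SSH between all nodes
masterha_check_ssh --conf=/etc/masterha/dble1/dble1.cnf
# verify the replication topology (this also exercises the failover scripts)
masterha_check_repl --conf=/etc/masterha/dble1/dble1.cnf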

master_ip_failover

#!/usr/bin/env perl

#  Copyright (C) 2011 DeNA Co.,Ltd.
#
#  This program is free software; you can redistribute it and/or modify
#  it under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.
#
#  This program is distributed in the hope that it will be useful,
#  but WITHOUT ANY WARRANTY; without even the implied warranty of
#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU General Public License for more details.
#
#  You should have received a copy of the GNU General Public License
#   along with this program; if not, write to the Free Software
#  Foundation, Inc.,
#  51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA

## Note: This is a sample script and is not complete. Modify the script based on your environment.

use strict;
use warnings FATAL => 'all';

use Getopt::Long;
use MHA::DBHelper;

my (
  $command,        $ssh_user,         $orig_master_host,
  $orig_master_ip, $orig_master_port, $new_master_host,
  $new_master_ip,  $new_master_port,  $new_master_user,
  $new_master_password
);
GetOptions(
  'command=s'             => \$command,
  'ssh_user=s'            => \$ssh_user,
  'orig_master_host=s'    => \$orig_master_host,
  'orig_master_ip=s'      => \$orig_master_ip,
  'orig_master_port=i'    => \$orig_master_port,
  'new_master_host=s'     => \$new_master_host,
  'new_master_ip=s'       => \$new_master_ip,
  'new_master_port=i'     => \$new_master_port,
  'new_master_user=s'     => \$new_master_user,
  'new_master_password=s' => \$new_master_password,
);

exit &main();

sub main {
  if ( $command eq "stop" || $command eq "stopssh" ) {

    # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
    # If you manage master ip address at global catalog database,
    # invalidate orig_master_ip here.
    my $exit_code = 1;
    eval {

      # call the corresponding disable command so that the failed master's node is no longer writable
      $orig_master_host =~tr/./_/; 
      system "mysql -P9066 -u man1 -p654321 -h 10.216.91.117 -e \"dataHost \@\@disable name = 'dataHost1' node='".$orig_master_host."'\"";
      # updating global catalog, etc
      $exit_code = 0;
    };
    if ($@) {
      warn "Got Error: $@\n";
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "start" ) {

    # all arguments are passed.
    # If you manage master ip address at global catalog database,
    # activate new_master_ip here.
    # You can also grant write access (create user, set read_only=0, etc) here.
    my $exit_code = 10;
    eval {
      my $new_master_handler = new MHA::DBHelper();

      # args: hostname, port, user, password, raise_error_or_not
      $new_master_handler->connect( $new_master_ip, $new_master_port,
        $new_master_user, $new_master_password, 1 );

      ## Set read_only=0 on the new master
      $new_master_handler->disable_log_bin_local();
      print "Set read_only=0 on the new master.\n";
      $new_master_handler->disable_read_only();

      ## Creating an app user on the new master
      print "Creating app user on the new master..\n";
      $new_master_handler->enable_log_bin_local();
      $new_master_handler->disconnect();

      ## try to switch the dataHost master into new master
      ## call the dataHost @@switch command to promote the new new_master_host node
      $new_master_host =~tr/./_/;
      system "mysql -P9066 -u man1 -p654321 -h 10.216.91.117 -e \"dataHost \@\@switch name = 'dataHost1' master='".$new_master_host."'\"";

      ## Update master ip on the catalog database, etc
      $exit_code = 0;
    };
    if ($@) {
      warn $@;

      # If you want to continue failover, exit 10.
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "status" ) {

    # do nothing
    exit 0;
  }
  else {
    &usage();
    exit 1;
  }
}

sub usage {
  print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

Note:

  • In writeHost host="10_216_91_118", the host name must join the IP octets with underscores, because $orig_master_host =~ tr/./_/; in the master_ip_failover script converts the dots of the IP address into underscores.

Start MHA

nohup /bin/masterha_manager --conf=/etc/masterha/dble1/dble1.cnf --ignore_fail_on_start  --ignore_last_failover> /etc/masterha/dble1/mha_manager.log 2>&1 &
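
To confirm the manager is actually monitoring the cluster, MHA ships a status helper; a quick check against the same configuration file:

# prints whether the manager process is running and which master it is currently monitoring
masterha_check_status --conf=/etc/masterha/dble1/dble1.cnf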

Start DBLE

The DBLE cluster uses ZooKeeper; start the dble nodes one at a time.

cd /data/apps/dble/bin
sh dble start
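
A quick way to confirm a node came up is the wrapper's status subcommand plus a probe of the two ports; a sketch (the status subcommand and the test query are assumptions about this release; fall back to the logs under /data/apps/dble/logs if needed):

# local wrapper status
sh dble status
# the service port should answer SQL, the management port should answer manager commands
mysql -h10.216.91.117 -P8066 -uroot -p123456 testdb -e "select 1;"
mysql -h10.216.91.117 -P9066 -uman1 -p654321 -e "show @@datasource;"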

sysbench benchmark

Clean up old data

/usr/share/sysbench/oltp_read_write.lua \
 --mysql-db=sbtest \
 --mysql-host=10.216.91.117 --mysql-port=8066 \
 --mysql-user=test --mysql-password=111111 \
 --auto_inc=off --tables=1 --table-size=100000 \
 --threads=4 --time=30 --report-interval=1 --max-requests=0 \
 --percentile=95 --db-ps-mode=disable --skip-trx=on cleanup

Prepare the data

/usr/share/sysbench/oltp_read_write.lua \
--mysql-db=sbtest --mysql-host=10.216.91.117 --mysql-port=8066 \
--mysql-user=test --mysql-password=111111 --auto_inc=off --tables=1 \
--table-size=100000 --threads=4 --time=30 --report-interval=1 --max-requests=0 \
 --percentile=95 --db-ps-mode=disable --skip-trx=on prepare
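
After prepare it can be useful to confirm the rows really were spread across the six backend databases; a sketch that queries the physical schemas db_1 through db_6 named in schema.xml directly (it assumes 10.216.91.118 currently hosts both write instances, as in the topology table at the top):

# row counts of sbtest1 on the 3311 shards (db_1, db_3, db_5) and the 3312 shards (db_2, db_4, db_6)
for db in db_1 db_3 db_5; do
  mysql -h10.216.91.118 -P3311 -utest -p -N -e "select '$db', count(*) from $db.sbtest1;"
done
for db in db_2 db_4 db_6; do
  mysql -h10.216.91.118 -P3312 -utest -p -N -e "select '$db', count(*) from $db.sbtest1;"
done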

Run the benchmark

/usr/share/sysbench/oltp_read_write.lua \
--mysql-db=sbtest --mysql-host=10.216.91.117 --mysql-port=8066 \
--mysql-user=test --mysql-password=111111 --auto_inc=off --tables=1 \
--table-size=100000 --threads=4 --time=30 --report-interval=1 --max-requests=0 \
--percentile=95 --db-ps-mode=disable --skip-trx=on run

DBLE usage guidelines

Smart ER tables

Conditions for a smart ER relationship:

  • The ER tables' dataNodes must be identical
  • The ER tables' sharding rules may differ
  • The ER tables' sharding algorithm must be identical

Example:

  1. ER table 1: sptb_jz_user2, sptb_jz_apply2
  2. ER table 2: sptb_jz_user, sptb_jz_apply

ER table 1 is a traditional ER table (declared with childTable in schema.xml), while ER table 2 is a smart ER table.

View the execution plans:

  1. explain select t1.id,t2.user_id from sptb_jz_user t1 inner join sptb_jz_apply t2 on t1.id=t2.user_id;
+-----------------+---------------+-------------------------------------------------------------------------------------------------------------------------------+
| DATA_NODE       | TYPE          | SQL/REF                                                                                                                       |
+-----------------+---------------+-------------------------------------------------------------------------------------------------------------------------------+
| dn1_0           | BASE SQL      | select `t1`.`id`,`t2`.`user_id` from  `sptb_jz_user` `t1` join  `sptb_jz_apply` `t2` on `t1`.`id` = `t2`.`user_id` where 1=1  |
| dn2_0           | BASE SQL      | select `t1`.`id`,`t2`.`user_id` from  `sptb_jz_user` `t1` join  `sptb_jz_apply` `t2` on `t1`.`id` = `t2`.`user_id` where 1=1  |
| merge_1         | MERGE         | dn1_0; dn2_0                                                                                                                  |
| shuffle_field_1 | SHUFFLE_FIELD | merge_1                                                                                                                       |
+-----------------+---------------+-------------------------------------------------------------------------------------------------------------------------------+
4 rows in set (0.01 sec)
  2. explain select t1.id,t2.user_id from sptb_jz_user2 t1 inner join sptb_jz_apply2 t2 on t1.id=t2.user_id;
+-----------------+---------------+---------------------------------------------------------------------------------------------------------------------------------+
| DATA_NODE       | TYPE          | SQL/REF                                                                                                                         |
+-----------------+---------------+---------------------------------------------------------------------------------------------------------------------------------+
| dn1_0           | BASE SQL      | select `t1`.`id`,`t2`.`user_id` from  `sptb_jz_user2` `t1` join  `sptb_jz_apply2` `t2` on `t1`.`id` = `t2`.`user_id` where 1=1  |
| dn2_0           | BASE SQL      | select `t1`.`id`,`t2`.`user_id` from  `sptb_jz_user2` `t1` join  `sptb_jz_apply2` `t2` on `t1`.`id` = `t2`.`user_id` where 1=1  |
| merge_1         | MERGE         | dn1_0; dn2_0                                                                                                                    |
| shuffle_field_1 | SHUFFLE_FIELD | merge_1                                                                                                                         |
+-----------------+---------------+---------------------------------------------------------------------------------------------------------------------------------+
4 rows in set (0.00 sec)

Schema guidelines

  • Every logical schema must map to its own dataNodes
  • Logical tables in different logical schemas may share a name, but they must map to different physical tables, i.e. logical tables with the same name must use different dataNodes
  • The backend databases that the dataNodes point to must all be different
  • A single physical table must not be configured for more than one logical table