What is Redis?
Official introduction: "Introduction to Redis" on redis.io
Installation:
Scroll down to the Install section at the bottom of the download page.
wget http://download.redis.io/releases/redis-4.0.11.tar.gz
tar zxvf redis-4.0.11.tar.gz
cd redis-4.0.11
make
Run:
[root@T9 redis-4.0.11]# ./src/redis-server
14015:C 12 Oct 15:17:21.602 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
14015:C 12 Oct 15:17:21.602 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=14015, just started
14015:C 12 Oct 15:17:21.602 # Warning: no config file specified, using the default config. In order to specify a config file use ./src/redis-server /path/to/redis.conf
14015:M 12 Oct 15:17:21.603 * Increased maximum number of open files to 10032 (it was originally set to 1024).
(ASCII-art startup banner: Redis 4.0.11 (00000000/0) 64 bit, running in standalone mode, port 6379, PID 14015)
14015:M 12 Oct 15:17:21.603 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
14015:M 12 Oct 15:17:21.603 # Server initialized
14015:M 12 Oct 15:17:21.603 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
14015:M 12 Oct 15:17:21.603 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
14015:M 12 Oct 15:17:21.604 * Ready to accept connections
Test:
[root@T9 redis-4.0.11]# ./src/redis-cli
127.0.0.1:6379> set foo bar
OK
127.0.0.1:6379> get foo
"bar"
127.0.0.1:6379> get bar
(nil)
127.0.0.1:6379>
Tutorial
The book is many years old, but still relevant. Redis has evolved a lot, but most of that has been in the form of internal improvements,
new advanced features (like lua scripting) and awesome new data types. The best way to learn Redis is still to start by understanding
the fundamentals presented in this book.
Reading notes:
[redis] Reading notes on "The Little Redis Book"
Other highlights
Checking server status from the CLI
127.0.0.1:6379> info
# Server
redis_version:4.0.11
redis_git_sha1:00000000
redis_git_dirty:0
...
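The INFO reply is plain text: `#`-prefixed section headers followed by `key:value` lines. A minimal Python sketch of a parser for that format (the function name `parse_info` is made up here, not part of any Redis client library):

```python
def parse_info(text: str) -> dict:
    """Parse a Redis INFO reply ("key:value" lines; "#" lines are section headers)."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and section headers like "# Server".
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        result[key] = value
    return result

sample = """# Server
redis_version:4.0.11
redis_git_sha1:00000000
redis_git_dirty:0
"""
print(parse_info(sample)["redis_version"])  # → 4.0.11
```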
Excerpts:
# Traditional relational databases represent everything with a single structure: tables.
If we were to apply this data structure concept to the relational world, we could say that databases expose a single data structure - tables.
# Redis, by contrast, exposes five basic data structures.
To me, that defines Redis’ approach. If you are dealing with scalars, lists, hashes, or sets, why not store them as scalars, lists, hashes and sets?
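The quote can be made concrete with Python's built-in types, which map almost one-to-one onto Redis' five structures (the sorted set has no exact built-in analogue; a list of (member, score) pairs is used here as an approximation):

```python
# Closest Python analogues of Redis' five core data structures.
structures = {
    "string": "bar",                          # SET foo bar / GET foo
    "list":   ["a", "b", "c"],                # LPUSH / RPUSH / LRANGE
    "hash":   {"field": "value"},             # HSET / HGET / HGETALL
    "set":    {"x", "y", "z"},                # SADD / SMEMBERS
    "zset":   [("low", 1.0), ("high", 2.0)],  # ZADD: members ordered by score
}
```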
Concepts
Databases
Databases are identified by number; the default is 0. Switch between them with the select command:
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379>
On key-value
While Redis is more than just a key-value store, at its core, every one of Redis’ five data structures has at least a key and a value.
Full reference of all Redis commands:
https://redis.io/commands
Redis Cluster documentation
HA (Sentinel): https://redis.io/topics/sentinel
https://redis.io/topics/cluster-tutorial
https://redis.io/topics/cluster-spec
Creating a Redis Cluster
1. Create a config file
[root@T9 cluster]# pwd
/root/worklist/20181012-redis/cluster
[root@T9 cluster]# cat redis.conf
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
[root@T9 cluster]#
2. Create config files for the 6 instances (same as above, one directory per instance, with port changed to 7000 through 7005)
[root@T9 cluster]# tree
.
├── 7000
│   └── redis.conf
├── 7001
│   └── redis.conf
├── 7002
│   └── redis.conf
├── 7003
│   └── redis.conf
├── 7004
│   └── redis.conf
├── 7005
│   └── redis.conf
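Writing six near-identical config files by hand is tedious; a small Python sketch (my own helper, matching the layout in the tree above) can generate them:

```python
import os

# Template matching the redis.conf shown above; only the port differs per instance.
TEMPLATE = """port {port}
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
"""

def write_configs(base="."):
    """Create one directory per port (7000-7005), each with its own redis.conf."""
    for port in range(7000, 7006):
        d = os.path.join(base, str(port))
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, "redis.conf"), "w") as f:
            f.write(TEMPLATE.format(port=port))
```

Running `write_configs()` inside the cluster directory reproduces the tree above.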
3. Start each of the 6 instances
[root@T9 7000]# ../../redis-4.0.11/src/redis-server ./redis.conf
4. Create the cluster
[root@T9 redis-4.0.11]# ./src/redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1
Unrecognized option or bad number of args for: '--cluster'
[root@T9 redis-4.0.11]#
Creation failed. Apparently this version of redis-cli (4.0) does not support the --cluster option.
Looking inside the create-cluster script, it does this instead:
[root@T9 redis-4.0.11]# ./src/redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
Before that, you also need to install:
yum install ruby
yum install rubygem-redis
The command above is itself obsolete as of 5.0. The current form is the following (so the two Ruby packages above don't actually need to be installed):
redis-cli --cluster create \
> 192.168.2.205:7000 192.168.2.205:7001 192.168.2.205:7002 192.168.2.205:7003 192.168.2.205:7004 192.168.2.205:7005 \
> --cluster-replicas 1
On success, a master prints roughly the following:
1838:M 16 Oct 16:45:32.268 # configEpoch set to 1 via CLUSTER SET-CONFIG-EPOCH
1838:M 16 Oct 16:45:32.284 # IP address for this node updated to 127.0.0.1
1838:M 16 Oct 16:45:37.224 # Cluster state changed: ok
1838:M 16 Oct 16:45:37.551 * Slave 127.0.0.1:7003 asks for synchronization
1838:M 16 Oct 16:45:37.551 * Partial resynchronization not accepted: Replication ID mismatch (Slave asked for '333d735fee630925f6cb26009980da7c036c1be1', my replication IDs are '0afc41e9ed8a0e316dbca9051ffa1c4bdcbab8dd' and '0000000000000000000000000000000000000000')
1838:M 16 Oct 16:45:37.551 * Starting BGSAVE for SYNC with target: disk
1838:M 16 Oct 16:45:37.553 * Background saving started by pid 2153
2153:C 16 Oct 16:45:37.899 * DB saved on disk
2153:C 16 Oct 16:45:37.902 * RDB: 8 MB of memory used by copy-on-write
1838:M 16 Oct 16:45:37.980 * Background saving terminated with success
1838:M 16 Oct 16:45:37.981 * Synchronization with slave 127.0.0.1:7003 succeeded
A slave prints:
1851:M 16 Oct 16:45:32.269 # configEpoch set to 4 via CLUSTER SET-CONFIG-EPOCH
1851:M 16 Oct 16:45:32.292 # IP address for this node updated to 127.0.0.1
1851:S 16 Oct 16:45:36.989 * Before turning into a slave, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
1851:S 16 Oct 16:45:36.989 # Cluster state changed: ok
1851:S 16 Oct 16:45:37.548 * Connecting to MASTER 127.0.0.1:7000
1851:S 16 Oct 16:45:37.548 * MASTER <-> SLAVE sync started
1851:S 16 Oct 16:45:37.548 * Non blocking connect for SYNC fired the event.
1851:S 16 Oct 16:45:37.549 * Master replied to PING, replication can continue...
1851:S 16 Oct 16:45:37.550 * Trying a partial resynchronization (request 333d735fee630925f6cb26009980da7c036c1be1:1).
1851:S 16 Oct 16:45:37.900 * Full resync from master: 28607c280e263eaefb803cc88f213822187e903f:0
1851:S 16 Oct 16:45:37.900 * Discarding previously cached master state.
1851:S 16 Oct 16:45:37.981 * MASTER <-> SLAVE sync: receiving 176 bytes from master
1851:S 16 Oct 16:45:37.982 * MASTER <-> SLAVE sync: Flushing old data
1851:S 16 Oct 16:45:38.008 * MASTER <-> SLAVE sync: Loading DB in memory
1851:S 16 Oct 16:45:38.008 * MASTER <-> SLAVE sync: Finished with success
1851:S 16 Oct 16:45:38.012 * Background append only file rewriting started by pid 2156
1851:S 16 Oct 16:45:38.302 * AOF rewrite child asks to stop sending diffs.
2156:C 16 Oct 16:45:38.302 * Parent agreed to stop sending diffs. Finalizing AOF...
2156:C 16 Oct 16:45:38.302 * Concatenating 0.00 MB of AOF diff received from parent.
2156:C 16 Oct 16:45:38.303 * SYNC append only file rewrite performed
2156:C 16 Oct 16:45:38.304 * AOF rewrite: 8 MB of memory used by copy-on-write
1851:S 16 Oct 16:45:38.330 * Background AOF rewrite terminated with success
1851:S 16 Oct 16:45:38.330 * Residual parent diff successfully flushed to the rewritten AOF (0.00 MB)
1851:S 16 Oct 16:45:38.331 * Background AOF rewrite finished successfully
The script finishes with:
[OK] All 16384 slots covered
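The 16384 slots come from Redis Cluster's key distribution scheme: each key is assigned to slot CRC16(key) mod 16384, using the CRC16/XModem variant, and if the key contains a non-empty {...} hash tag, only the text inside the braces is hashed (so related keys can be forced onto the same slot). A self-contained Python sketch of the computation:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), the variant Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster slots, honoring {...} hash tags."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot (useful for multi-key operations):
assert hash_slot("{user1000}.following") == hash_slot("{user1000}.followers")
```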
Killing a master
Its slave gets promoted to master:
1855:S 16 Oct 16:51:11.497 # Error condition on socket for SYNC: Connection refused
1855:S 16 Oct 16:51:11.604 # Failover election won: I'm the new master.
1855:S 16 Oct 16:51:11.605 # configEpoch set to 7 after successful failover
1855:M 16 Oct 16:51:11.605 # Setting secondary replication ID to a7bc292355042d4c2344f8a84503a418775018f0, valid up to offset: 463. New replication ID is 1ac1b3a40de43eb7012840543e9e50200a948334
1855:M 16 Oct 16:51:11.605 * Discarding previously cached master state.
1855:M 16 Oct 16:51:11.605 # Cluster state changed: ok
Restart the old master and it rejoins as a slave.
Kill the new master, and that slave gets promoted back to master.
5. shutdown
[root@T9 redis-4.0.11]# ./src/redis-cli -p 7000 shutdown nosave
[root@T9 redis-4.0.11]# ./src/redis-cli -p 7001 shutdown nosave
[root@T9 redis-4.0.11]# ./src/redis-cli -p 7002 shutdown nosave
[root@T9 redis-4.0.11]# ./src/redis-cli -p 7003 shutdown nosave
[root@T9 redis-4.0.11]# ./src/redis-cli -p 7004 shutdown nosave
[root@T9 redis-4.0.11]# ./src/redis-cli -p 7005 shutdown nosave
Creating a Redis Cluster with the bundled script
Script:
utils/create-cluster/create-cluster
Start:
[root@T9 create-cluster]# ./create-cluster start
[root@T9 create-cluster]# ./create-cluster create
[root@T9 create-cluster]# ./create-cluster stop
Usage: the only difference from non-cluster mode is the -c flag.
[root@T9 create-cluster]# ../../src/redis-cli -c -p 30001
The gossip protocol
https://www.jianshu.com/p/133560ef28df
Miscellaneous
Why does a Redis Cluster require at least 6 nodes?
The Redis cluster tutorial says:
Note that the minimal cluster that works as expected requires to contain at least three master nodes. For your first tests it is strongly suggested to start a six nodes cluster with three masters and three slaves.
A test with four nodes (2 masters, 2 slaves) fails with the following error:
[root@T9 20181012-redis]# ./redis-4.0.11/src/redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003
>>> Creating cluster
*** ERROR: Invalid configuration for cluster creation.
*** Redis Cluster requires at least 3 master nodes.
*** This is not possible with 4 nodes and 1 replicas per node.
*** At least 6 nodes are required.
[root@T9 20181012-redis]#
Probably to prevent split-brain? https://grokbase.com/t/gg/redis-db/15cbatbypm/why-a-minimal-cluster-should-require-at-least-three-master-nodes
For the same reason, Sentinel also requires at least three nodes.
A sentence in the Sentinel docs seems to explain this simply enough:
Note that we will never show setups where just two Sentinels are used, since Sentinels always need to talk with the majority in order to start a failover.
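The majority requirement is simple arithmetic: with N voters, a failover needs at least N//2 + 1 of them to agree, so a 2-node deployment cannot survive a single failure while a 3-node deployment can. A quick sketch (function names are my own, for illustration only):

```python
def majority(total: int) -> int:
    """Smallest number of votes that constitutes a majority of `total` voters."""
    return total // 2 + 1

def can_elect(total: int, alive: int) -> bool:
    """Can the surviving nodes still reach a majority and authorize a failover?"""
    return alive >= majority(total)

# With 2 Sentinels, losing one leaves 1 < 2: no failover is possible.
assert not can_elect(2, 1)
# With 3 Sentinels, losing one leaves 2 >= 2: failover can proceed.
assert can_elect(3, 2)
```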
Also, if you deploy Redis Cluster, you don't need to deploy Sentinel as well:
Cluster is only for when you need sharding. If you only want replication + failover use Sentinel (though you still need 3 hosts for the Sentinels)