This Redis cluster has 240 nodes (120 masters and 120 slaves) and has been running fine for a long time. Recently, however, it has been performing a master-slave failover almost every few hours.
Here are some logs from the Redis servers.
5c541d3a765e087af7775ba308f51ffb2aa54151 10.12.28.165:6502
13306:M 08 Mar 18:55:02.597 * Background append only file rewriting started by pid 15396
13306:M 08 Mar 18:55:41.636 # Cluster state changed: fail
13306:M 08 Mar 18:55:45.321 # Connection with slave client id #112948 lost.
13306:M 08 Mar 18:55:46.243 # Configuration change detected. Reconfiguring myself as a replica of afb6e012db58bd26a7c96182b04f0a2ba6a45768
13306:S 08 Mar 18:55:47.134 * AOF rewrite child asks to stop sending diffs.
15396:C 08 Mar 18:55:47.134 * Parent agreed to stop sending diffs. Finalizing AOF...
15396:C 08 Mar 18:55:47.134 * Concatenating 0.02 MB of AOF diff received from parent.
15396:C 08 Mar 18:55:47.135 * SYNC append only file rewrite performed
15396:C 08 Mar 18:55:47.186 * AOF rewrite: 4067 MB of memory used by copy-on-write
13306:S 08 Mar 18:55:47.209 # Cluster state changed: ok
5ac747878f881349aa6a62b179176ddf603e034c 10.12.30.107:6500
22825:M 08 Mar 18:55:30.534 * FAIL message received from da493af5bb3d15fc563961de09567a47787881be about 5c541d3a765e087af7775ba308f51ffb2aa54151
22825:M 08 Mar 18:55:31.440 # Failover auth granted to afb6e012db58bd26a7c96182b04f0a2ba6a45768 for epoch 323
22825:M 08 Mar 18:55:41.587 * Background append only file rewriting started by pid 23628
22825:M 08 Mar 18:56:24.200 # Cluster state changed: fail
22825:M 08 Mar 18:56:30.002 # Connection with slave client id #382416 lost.
22825:M 08 Mar 18:56:30.830 * FAIL message received from 0decbe940c6f4d4330fae5a9c129f1ad4932405d about 5ac747878f881349aa6a62b179176ddf603e034c
22825:M 08 Mar 18:56:30.840 # Failover auth denied to d46f95da06cfcd8ea5eaa15efabff5bd5e99df55: its master is up
22825:M 08 Mar 18:56:30.843 # Configuration change detected. Reconfiguring myself as a replica of d46f95da06cfcd8ea5eaa15efabff5bd5e99df55
22825:S 08 Mar 18:56:31.030 * Clear FAIL state for node 5ac747878f881349aa6a62b179176ddf603e034c: slave is reachable again.
22825:S 08 Mar 18:56:31.030 * Clear FAIL state for node 5c541d3a765e087af7775ba308f51ffb2aa54151: slave is reachable again.
22825:S 08 Mar 18:56:31.294 # Cluster state changed: ok
22825:S 08 Mar 18:56:31.595 * Connecting to MASTER 10.12.30.104:6404
22825:S 08 Mar 18:56:31.671 * MASTER SLAVE sync started
22825:S 08 Mar 18:56:31.671 * Non blocking connect for SYNC fired the event.
22825:S 08 Mar 18:56:31.672 * Master replied to PING, replication can continue...
22825:S 08 Mar 18:56:31.673 * Partial resynchronization not possible (no cached master)
22825:S 08 Mar 18:56:31.691 * AOF rewrite child asks to stop sending diffs.
Below is the configuration of this cluster.
daemonize no
tcp-backlog 511
timeout 0
tcp-keepalive 60
loglevel notice
databases 16
dir "/var/cachecloud/data"
stop-writes-on-bgsave-error no
repl-timeout 60
repl-ping-slave-period 10
repl-disable-tcp-nodelay no
repl-backlog-size 10000000
repl-backlog-ttl 7200
slave-serve-stale-data yes
slave-read-only yes
slave-priority 100
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 512mb 128mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
port 6401
maxmemory 13000mb
maxmemory-policy volatile-lru
appendonly yes
appendfsync no
appendfilename "appendonly-6401.aof"
dbfilename "dump-6401.rdb"
aof-rewrite-incremental-fsync yes
no-appendfsync-on-rewrite yes
auto-aof-rewrite-min-size 62500kb
auto-aof-rewrite-percentage 86
rdbcompression yes
rdbchecksum yes
repl-diskless-sync no
repl-diskless-sync-delay 5
maxclients 10000
hll-sparse-max-bytes 3000
min-slaves-to-write 0
min-slaves-max-lag 10
aof-load-truncated yes
notify-keyspace-events ""
bind 10.12.26.226
protected-mode no
cluster-enabled yes
cluster-node-timeout 15000
cluster-slave-validity-factor 10
cluster-migration-barrier 1
cluster-config-file "nodes-6401.conf"
cluster-require-full-coverage no
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command KEYS ""
In my opinion, the AOF rewrite should not block the Redis main thread, yet it seems to leave this node unable to respond to pings from the other cluster nodes.
Check THP (Transparent Huge Pages) in your Linux kernel parameters. The AOF diff is only 0.02 MB, yet copy-on-write used 4067 MB: with THP enabled, a forked child such as the AOF rewrite process can trigger far more copy-on-write memory than the data actually changed, because each write duplicates a 2 MB huge page instead of a 4 KB page.
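To verify this, you can inspect and change the THP mode through sysfs (a sketch, assuming a Linux host; disabling requires root, and Redis itself logs a warning at startup when THP is set to `always`):

```shell
# Show the current THP mode; the bracketed value is active,
# e.g. "[always] madvise never" means THP is fully on.
cat /sys/kernel/mm/transparent_hugepage/enabled

# Disable THP for the running kernel (as root). Make this
# persistent via your init scripts, or the setting is lost on reboot.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
```

After disabling THP, the copy-on-write figure reported at the end of the next AOF rewrite should drop to roughly the size of the data actually modified during the rewrite.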
I am working with the combined (mixed) RDB-AOF mode. I am looking for a way to load from the RDB file after a restart, mainly for fast restarts. Apart from that, I want to keep writing the AOF. Once I know a disaster has happened, I would manually load from the AOF. This is my current configuration. (I know that `appendonly yes` means the AOF will be loaded after a restart; what I am looking for is loading from the RDB while continuing to write the AOF.) Thanks
We have a Redis server running with the default configuration except that AOF is enabled. For performance reasons we would like to disable AOF and use only RDB. If we restart the Redis instance with `appendonly no`, all keys are lost. Thankfully, restarting with `appendonly yes` again brings our data back. The Redis documentation shows how to migrate from RDB to AOF, but what is the correct way to migrate from AOF to RDB?
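One way to do this without losing data is to switch AOF off at runtime instead of at restart, so the in-memory dataset is never reloaded from an empty file. A sketch, assuming a reachable instance on the default port and using only standard commands (`BGSAVE`, `CONFIG SET`, `CONFIG REWRITE`):

```shell
# 1. Take a fresh RDB snapshot first, so the on-disk dump is current.
redis-cli BGSAVE

# 2. Turn AOF off at runtime; the dataset stays in memory untouched.
redis-cli CONFIG SET appendonly no

# 3. Persist the change back into redis.conf so a later restart
#    starts with AOF disabled and loads from the RDB file.
redis-cli CONFIG REWRITE
```

The keys disappeared in your test because a server started with `appendonly yes` loads the (possibly missing or empty) AOF file and ignores the RDB dump; flipping the setting while the server is running avoids that load path entirely.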
I have been reading some articles about Redis persistence; there are two ways to make Redis persistent: AOF and RDB. OK, I will skip the basics of what "AOF" and "RDB" mean. My question is about AOF: what happens when Redis's AOF file gets too large, even after rewriting? I searched Google but found nothing. Some people say that when the AOF file reaches 3 GB or 4 GB, redis-server will fail to start. Can anyone tell me? Thanks a lot.
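For context, AOF growth is normally kept in check by the automatic rewrite thresholds in redis.conf (these two directives are standard; the values below are the shipped defaults):

```
# Trigger BGREWRITEAOF automatically once the AOF has grown 100%
# beyond its size after the last rewrite, but never for files
# smaller than 64 MB.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```

A rewrite can also be forced at any time with `redis-cli BGREWRITEAOF`. The rewritten file's size is bounded by the size of the dataset itself, not by the command history, so a correctly rewriting AOF does not grow without limit.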
Main topics: enabling AOF persistence, the AOF persistence mechanism, AOF policy configuration, and a comparison of AOF and RDB. AOF, also called append mode or log mode, is the other persistence strategy Redis provides. It stores the commands the Redis server has already executed, recording only those commands that modified memory; this way of recording data is called "incremental replication", and the data is kept in a default append-only file. Enabling AOF persistence: the AOF mechanism is disabled by default and can be enabled by editing the Redis configuration file, as follows: 1) On Windows, do the following: 2) On Linux, do the following
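On either platform, enabling AOF comes down to the same redis.conf directives (all three are standard; `everysec` is the commonly recommended fsync policy):

```
# Turn on the append-only file.
appendonly yes

# Name of the AOF file, written into the "dir" directory.
appendfilename "appendonly.aof"

# Fsync once per second: a middle ground between "always"
# (safest, slowest) and "no" (fastest, least durable).
appendfsync everysec
```

The same switch can be flipped on a running server with `redis-cli CONFIG SET appendonly yes`, which immediately starts a background rewrite to seed the new file.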
In Redis 4.0, a new mixed RDB-AOF format was introduced. From the Redis changelog: "Mixed RDB-AOF format. If enabled, the new format is used when rewriting the AOF file: the rewrite uses the more compact and faster RDB format to generate a preamble, and the AOF stream is appended to the file. This allows faster rewrites and reloads when AOF persistence is used." I would like to know how to set this in the Redis configuration file?
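Redis 4.0 exposes this feature through a single redis.conf directive (it ships disabled in 4.0 and became the default later; it can also be changed at runtime with `CONFIG SET`):

```
# Use an RDB preamble followed by the AOF tail when rewriting
# the append-only file (requires appendonly yes to take effect).
aof-use-rdb-preamble yes
```

With this enabled, the next `BGREWRITEAOF` produces a file whose beginning is a binary RDB image of the dataset, so restarts load much faster while recent writes are still replayed from the appended AOF commands.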
This question is about Redis persistence. I am using Redis as a "fast backend" for a social website. It is a single-server setup. I have been steadily moving responsibilities from PostgreSQL over to Redis. At present, in , the appendonly setting is set to . Snapshotting is set to , , . All of this is true for both production and development. According to the production logs, is being called a lot. Does that mean that, in effect, I get a backup every 60 seconds? Some literature recommends using both AOF and RDB backups together, so I am weighing whether to turn on appendonly
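For reference, the snapshot thresholds mentioned above are expressed in redis.conf as `save <seconds> <changes>` lines (the values below are the shipped defaults, shown here only as an illustration of how the rules combine):

```
# Snapshot if at least <changes> writes occurred within <seconds>.
# Multiple lines form an OR: any satisfied rule triggers a BGSAVE.
save 900 1
save 300 10
save 60 10000
```

So a rule like `save 60 10000` does not mean a backup every 60 seconds; it means a background snapshot runs when at least 10000 keys changed within a 60-second window, which on a write-heavy site can indeed approach one BGSAVE per minute.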