My compression strategy in production was LZ4, but I changed it to Deflate.
For the compression change to take effect we had to use nodetool upgradesstables to force the new compression strategy onto all the tables.
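For context, the change was applied roughly as follows (my_keyspace / my_table are placeholders, and the chunk size is illustrative rather than our actual setting):
-- run once in cqlsh; DeflateCompressor trades CPU for a better ratio than LZ4Compressor
ALTER TABLE my_keyspace.my_table WITH compression = {'class': 'DeflateCompressor', 'chunk_length_in_kb': 64};
# then on every node; -a forces rewriting SSTables that are already on the current format version,
# which a compression-only change needs
nodetool upgradesstables -a -- my_keyspace my_table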
However, once the upgrade command had completed on all 5 nodes in the cluster, my requests started failing, both reads and writes.
The problem was traced down to one particular node in the 5-node cluster, and to one particular table on that node. Every node in my cluster holds roughly the same amount of data and has the same configuration, yet this one node in particular is misbehaving.
Output of nodetool status:
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN xx.xxx.xx.xxx 283.94 GiB 256 40.4% 24950207-5fbc-4ea6-92aa-d09f37e83a1c rack1
UN xx.xxx.xx.xxx 280.55 GiB 256 39.9% 4ecdf7f8-a4d8-4a94-a930-1a87a80ae510 rack1
UN xx.xxx.xx.xxx 284.61 GiB 256 40.5% de2ada08-264b-421a-961f-5fd113f28208 rack1
UN YY.YYY.YY.YYY 280.44 GiB 256 40.2% 68c7c130-6cf8-4864-bde8-1819f238045c rack2
UN xx.xxx.xx.xxx 273.71 GiB 256 39.0% 6c080e47-ffb2-4fbc-bc7e-73df19103d2a rack2
The YY.YYY.YY.YYY node above is the one giving the errors.
Cluster configuration:
nodetool tablestats for that particular table shows a high number of dropped mutations:
SSTable count: 11
Space used (live): 9.82 GiB
Space used (total): 9.82 GiB
Space used by snapshots (total): 0 bytes
Off heap memory used (total): 26.77 MiB
SSTable Compression Ratio: 0.1840953951763564
Number of keys (estimate): 15448921
Memtable cell count: 8558
Memtable data size: 5.89 MiB
Memtable off heap memory used: 0 bytes
Memtable switch count: 5
Local read count: 67792
Local read latency: 92.314 ms
Local write count: 31336
Local write latency: 0.067 ms
Pending flushes: 0
Percent repaired: 21.18
Bloom filter false positives: 1
Bloom filter false ratio: 0.00794
Bloom filter space used: 22.2 MiB
Bloom filter off heap memory used: 18.45 MiB
Index summary off heap memory used: 3.24 MiB
Compression metadata off heap memory used: 5.08 MiB
Compacted partition minimum bytes: 87
Compacted partition maximum bytes: 943127
Compacted partition mean bytes: 3058
Average live cells per slice (last five minutes): 1.0
Maximum live cells per slice (last five minutes): 1
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
Dropped Mutations: 4.13 KiB
nodetool info shows:
Gossip active : true
Thrift active : false
Native Transport active: true
Load : 280.43 GiB
Generation No : 1514537104
Uptime (seconds) : 8810363
Heap Memory (MB) : 1252.06 / 3970.00
Off Heap Memory (MB) : 573.33
Data Center : dc1
Rack : rack1
Exceptions : 18987
Key Cache : entries 351612, size 99.86 MiB, capacity 100 MiB, 11144584 hits, 21126425 requests, 0.528 recent hit rate, 14400 save period in seconds
Out of the 5 nodes, this one particular node has a high count of dropped mutations (around 560 KB) and dropped reads, even though it has the same configuration as the other nodes and holds an equal amount of data.
We tried repairing that node, but it did not bring down the dropped mutations, and requests kept failing.
We restarted the Cassandra service on that node, but the dropped mutations still keep increasing.
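For reference, the repair and the dropped-message monitoring were along these lines (keyspace and table names are placeholders):
# primary-range repair of the affected table, run on the misbehaving node
nodetool repair -pr my_keyspace my_table
# the bottom of tpstats lists per-message-type drop counters (MUTATION, READ_REPAIR, HINT, ...)
nodetool tpstats
# re-run periodically to see whether the drop counters keep growing
watch -n 60 nodetool tpstats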
System logs:
ERROR [ReadRepairStage:10229] 2018-04-11 16:02:12,954 CassandraDaemon.java:229 - Exception in thread Thread[ReadRepairStage:10229,5,main]
org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - received only 0 responses.
at org.apache.cassandra.service.DataResolver$RepairMergeListener.close(DataResolver.java:171) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.close(UnfilteredPartitionIterators.java:182) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:82) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.service.DataResolver.compareResponses(DataResolver.java:89) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:50) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.10.jar:3.10]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_144]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) ~[apache-cassandra-3.10.jar:3.10]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_144]
ERROR [ReadRepairStage:10231] 2018-04-11 16:02:17,551 CassandraDaemon.java:229 - Exception in thread Thread[ReadRepairStage:10231,5,main]
org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - received only 0 responses.
at org.apache.cassandra.service.DataResolver$RepairMergeListener.close(DataResolver.java:171) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.close(UnfilteredPartitionIterators.java:182) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:82) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.service.DataResolver.compareResponses(DataResolver.java:89) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:50) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.10.jar:3.10]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_144]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) ~[apache-cassandra-3.10.jar:3.10]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_144]
ERROR [ReadRepairStage:10232] 2018-04-11 16:02:22,221 CassandraDaemon.java:229 - Exception in thread Thread[ReadRepairStage:10232,5,main]
org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - received only 0 responses.
at org.apache.cassandra.service.DataResolver$RepairMergeListener.close(DataResolver.java:171) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.close(UnfilteredPartitionIterators.java:182) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:82) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.service.DataResolver.compareResponses(DataResolver.java:89) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:50) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.10.jar:3.10]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_144]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) ~[apache-cassandra-3.10.jar:3.10]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_144]
Debug logs:
DEBUG [ReadRepairStage:161301] 2018-04-11 01:45:01,432 DataResolver.java:169 - Timeout while read-repairing after receiving all 1 data and digest responses
ERROR [ReadRepairStage:161301] 2018-04-11 01:45:01,432 CassandraDaemon.java:229 - Exception in thread Thread[ReadRepairStage:161301,5,main]
org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - received only 0 responses.
at org.apache.cassandra.service.DataResolver$RepairMergeListener.close(DataResolver.java:171) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.close(UnfilteredPartitionIterators.java:182) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:82) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.service.DataResolver.compareResponses(DataResolver.java:89) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:50) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.10.jar:3.10]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_144]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) ~[apache-cassandra-3.10.jar:3.10]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_144]
DEBUG [ReadRepairStage:161304] 2018-04-11 01:45:02,692 ReadCallback.java:242 - Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(-4042387324575455696, 229229902e5a43588d52466b8063b557) (d41d8cd98f00b204e9800998ecf8427e vs 4662dce3dcb05114ed670fbc40291d53)
at org.apache.cassandra.service.DigestResolver.compareResponses(DigestResolver.java:92) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:233) ~[apache-cassandra-3.10.jar:3.10]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) [apache-cassandra-3.10.jar:3.10]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_144]
DEBUG [GossipStage:1] 2018-04-11 01:45:02,958 FailureDetector.java:457 - Ignoring interval time of 2000158817 for /xx.xxx.xx.xxx
WARN [PERIODIC-COMMIT-LOG-SYNCER] 2018-04-11 01:45:04,665 NoSpamLogger.java:94 - Out of 1 commit log syncs over the past 0.00s with average duration of 180655.05ms, 1 have exceeded the configured commit interval by an average of 170655.05ms
DEBUG [ReadRepairStage:161303] 2018-04-11 01:45:04,693 DataResolver.java:169 - Timeout while read-repairing after receiving all 1 data and digest responses
ERROR [ReadRepairStage:161303] 2018-04-11 01:45:04,709 CassandraDaemon.java:229 - Exception in thread Thread[ReadRepairStage:161303,5,main]
org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - received only 0 responses.
at org.apache.cassandra.service.DataResolver$RepairMergeListener.close(DataResolver.java:171) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.close(UnfilteredPartitionIterators.java:182) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:82) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.service.DataResolver.compareResponses(DataResolver.java:89) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.service.AsyncRepairCallback$1.runMayThrow(AsyncRepairCallback.java:50) ~[apache-cassandra-3.10.jar:3.10]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.10.jar:3.10]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_144]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79) ~[apache-cassandra-3.10.jar:3.10]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_144]
INFO [ScheduledTasks:1] 2018-04-11 01:45:07,353 MessagingService.java:1214 - MUTATION messages were dropped in last 5000 ms: 87 internal and 77 cross node. Mean internal dropped latency: 89509 ms and Mean cross-node dropped latency: 95871 ms
INFO [ScheduledTasks:1] 2018-04-11 01:45:07,354 MessagingService.java:1214 - HINT messages were dropped in last 5000 ms: 0 internal and 93 cross node. Mean internal dropped latency: 0 ms and Mean cross-node dropped latency: 86440 ms
INFO [ScheduledTasks:1] 2018-04-11 01:45:07,354 MessagingService.java:1214 - READ_REPAIR messages were dropped in last 5000 ms: 0 internal and 72 cross node. Mean internal dropped latency: 0 ms and Mean cross-node dropped latency: 73159 ms
Hoping someone can help me with this.
Update:
nodetool info after increasing this node's heap size to 9 GB (a sketch of the heap change follows the output below):
ID : 68c7c130-6cf8-4864-bde8-1819f238045c
Gossip active : true
Thrift active : false
Native Transport active: true
Load : 279.32 GiB
Generation No : 1523504294
Uptime (seconds) : 9918
Heap Memory (MB) : 5856.73 / 9136.00
Off Heap Memory (MB) : 569.67
Data Center : dc1
Rack : rack2
Exceptions : 862
Key Cache : entries 3650, size 294.83 KiB, capacity 100 MiB, 8112 hits, 22015 requests, 0.368 recent hit rate, 14400 save period in seconds
Row Cache : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 requests, NaN recent hit rate, 0 save period in seconds
Counter Cache : entries 0, size 0 bytes, capacity 50 MiB, 0 hits, 0 requests, NaN recent hit rate, 7200 save period in seconds
Chunk Cache : entries 7680, size 480 MiB, capacity 480 MiB, 1282773 misses, 1292444 requests, 0.007 recent hit rate, 3797.874 microseconds miss latency
Percent Repaired : 6.190888093280888%
Token : (invoke with -T/--tokens to see all 256 tokens)
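For completeness, one common way to raise the heap on a Cassandra 3.x node is via cassandra-env.sh; the values below are illustrative of the 9 GB change, not copied from our configuration:
# conf/cassandra-env.sh
MAX_HEAP_SIZE="9G"
HEAP_NEWSIZE="2G"     # young generation; keep it well below MAX_HEAP_SIZE
# restart so the new JVM settings take effect
sudo systemctl restart cassandra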
1) You are on 3.10 now; you should strongly consider moving to 3.11.2. A number of critical bugs were fixed in 3.11.2.
2) If you have one node misbehaving and RF=3, then you are probably treating that node differently from the others. It may be that your application is connecting only to that one host and the coordination cost is overwhelming it, or, because of some misconfiguration, you may have a disproportionate amount of data on it (it looks like you have RF=3 with 2 racks, so quite likely the data is not distributed the way you expect).
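A quick way to sanity-check how that keyspace is actually replicated across the two racks is something like the following (keyspace, table and key are placeholders):
# shows the replication class and factor (per-DC counts for NetworkTopologyStrategy)
cqlsh -e "DESCRIBE KEYSPACE my_keyspace"
# 'Owns (effective)' here is computed against this keyspace's RF, so per-node skew shows up
nodetool status my_keyspace
# lists the replica nodes that own a given partition key
nodetool getendpoints my_keyspace my_table 'some_key'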
We faced this issue ourselves and, as a last resort, solved it by removing the node from the cluster (we suspect there was some unknown hardware fault or something like a memory leak on it).
We suggest removing the node with nodetool removenode rather than nodetool decommission, because you do not want to stream data off the failing node; removenode streams the data from one of its replicas instead. (This is a safety measure and avoids the possibility of streaming corrupted data to the other nodes.)
After we removed the node, cluster health returned to normal and it has been running fine.
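A minimal sketch of that removal, assuming the failing node's Cassandra process has already been stopped and using its Host ID from the nodetool status output above:
# run from any healthy node in the cluster
nodetool status                                             # note the Host ID of the node to remove
nodetool removenode 68c7c130-6cf8-4864-bde8-1819f238045c    # ranges are streamed from the surviving replicas
nodetool removenode status                                  # check progress
# nodetool removenode force                                 # only if the removal hangs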