Question:

Consumer reads only from one specific broker in a 3-broker setup with replication factor 3 and a single partition

阎涵忍
2023-03-14

I have started all the components of a multi-broker setup in a container cluster running on a single machine, using the shell scripts from https://archive.apache.org/dist/kafka/2.0.0/kafka_2.11-2.0.0.tgz:

  1. Started ZooKeeper with zookeeper.properties
  2. Started 3 brokers with 3 different server.properties files. They differ only in the following config values (a sketch of such files follows the commands below):
broker.id
log.dirs
port
bin/kafka-topics.sh --create --topic repl_topic --zookeeper localhost:2181 --replication-factor 3 --partitions 1
bin/kafka-console-producer.sh --topic repl_topic --broker-list localhost:9092,localhost:9093,localhost:9094
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --topic repl_topic --from-beginning
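
A minimal sketch of what the per-broker property files could look like; the file name and log paths are assumptions, while the ports match the commands above (note that in Kafka 2.0 the port setting is deprecated in favor of listeners):

    # server-0.properties; brokers 1 and 2 are analogous,
    # with broker.id=1/2, port=9093/9094 and their own log.dirs
    broker.id=0
    port=9092
    log.dirs=/tmp/kafka-logs-0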

The consumer only talks to broker 0. It does not interact with the other brokers.
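
One way to confirm which brokers a running consumer is actually connected to is to list the established TCP connections while it is consuming (assuming the three brokers listen on localhost:9092-9094):

    netstat -tn | grep -E ':(9092|9093|9094)'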

Why?

1 answer in total

琴献
2023-03-14

I finally solved this. Checking __consumer_offsets, I noticed that it was not replicated:

 bin/kafka-topics.sh --topic __consumer_offsets --zookeeper localhost:2181 --describe
Topic:__consumer_offsets    PartitionCount:50   ReplicationFactor:1 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
    Topic: __consumer_offsets   Partition: 0    Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 1    Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 2    Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 3    Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 4    Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 5    Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 6    Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 7    Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 8    Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 9    Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 10   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 11   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 12   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 13   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 14   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 15   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 16   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 17   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 18   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 19   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 20   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 21   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 22   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 23   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 24   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 25   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 26   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 27   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 28   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 29   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 30   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 31   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 32   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 33   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 34   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 35   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 36   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 37   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 38   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 39   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 40   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 41   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 42   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 43   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 44   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 45   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 46   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 47   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 48   Leader: 0   Replicas: 0 Isr: 0
    Topic: __consumer_offsets   Partition: 49   Leader: 0   Replicas: 0 Isr: 0

In fact, the first time I started a consumer, it was against a topic with a replication factor of 1. At that moment the consumer triggered the creation of __consumer_offsets with only a single replica. Its replication factor is never adjusted afterwards, so if broker 0 goes down, the other brokers have no copy of it.
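
This most likely happened because the brokers were started from the sample config/server.properties, which is written for the single-node quickstart and sets offsets.topic.replication.factor=1. Setting it in every server.properties before the first consumer ever connects should avoid the problem; the value 3 below is the natural target for this 3-broker setup:

    # in each broker's server.properties, before __consumer_offsets is auto-created
    offsets.topic.replication.factor=3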

A new replication level can be set for every partition with the command bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file new_reassignment.json --execute, using the following new_reassignment.json:

{"version":1,"partitions":[
{"topic":"__consumer_offsets","partition":0,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":1,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":3,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":4,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":5,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":6,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":7,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":8,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":9,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":10,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":11,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":12,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":13,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":14,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":15,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":16,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":17,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":18,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":19,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":20,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":21,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":22,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":23,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":24,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":25,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":26,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":27,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":28,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":29,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":30,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":31,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":32,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":33,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":34,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":35,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":36,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":37,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":38,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":39,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":40,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":41,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":42,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":43,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":44,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":45,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":46,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":47,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":48,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"__consumer_offsets","partition":49,"replicas":[0,1,2],"log_dirs":["any","any","any"]}]}

The number of partitions in this topic defaults to 50, and the first consumer triggers its creation. Consumers use it to commit the offset of the last message they have consumed for each topic:partition. Since every consumer group is coordinated by the broker that leads the group's __consumer_offsets partition, a topic that only exists on broker 0 forces all group coordination and offset handling through broker 0; the other brokers cannot take over that role for the consumer.
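
For what it's worth, the coordinator of a group is the leader of the __consumer_offsets partition that the group id hashes to (roughly abs(hash(group.id)) % 50), which in the state above is always broker 0. A group's coordinator can be inspected with kafka-consumer-groups.sh; the group name my-group below is a placeholder:

    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --state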
