Question:

How to deploy Strimzi KafkaMirrorMaker

申昌勋
2023-03-14

I am using the Strimzi operator and running Kafka clusters on k8s. I want to use Kafka MirrorMaker, so I deployed it with the CRD YAML, but my KMM pod is stuck in CrashLoopBackOff and I don't understand what the problem is. Here is my Kafka MirrorMaker YAML:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  version: 2.6.0
  replicas: 1
  consumer:
    bootstrapServers: my-cluster-kafka-bootstrap:9092
    groupId: my-source-group-id
  producer:
    bootstrapServers: my-cluster2-kafka-bootstrap:9092
  whitelist: ".*"
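
As a general note, when a pod sits in CrashLoopBackOff the first things to check are the pod events and the logs of the previously crashed container. A minimal sketch, using the namespace and pod name from the listings further down:

# restart count, last container state and recent events
kubectl describe pod my-mirror-maker-mirror-maker-78544b8c8-rz5ms -n strimzi
# logs of the previous (crashed) container run
kubectl logs my-mirror-maker-mirror-maker-78544b8c8-rz5ms -n strimzi --previous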

And my Kafka cluster YAML:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.6.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.6"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

My second Kafka cluster:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster2
spec:
  kafka:
    version: 2.6.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.6"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

My pod list and their statuses:

strimzi       my-bridge-bridge-684df9fc64-d7gqg              1/1     Running   2          10m
strimzi       my-cluster-entity-operator-7b546bddfd-4622z    3/3     Running   0          6m51s
strimzi       my-cluster-kafka-0                             1/1     Running   0          9m26s
strimzi       my-cluster-kafka-1                             1/1     Running   2          9m26s
strimzi       my-cluster-kafka-2                             1/1     Running   2          9m26s
strimzi       my-cluster-zookeeper-0                         1/1     Running   0          10m
strimzi       my-cluster-zookeeper-1                         1/1     Running   1          10m
strimzi       my-cluster-zookeeper-2                         1/1     Running   0          10m
strimzi       my-cluster2-entity-operator-74f6f4dbc4-7jhvh   3/3     Running   0          7m52s
strimzi       my-cluster2-kafka-0                            1/1     Running   0          9m39s
strimzi       my-cluster2-kafka-1                            1/1     Running   0          9m39s
strimzi       my-cluster2-kafka-2                            1/1     Running   0          9m39s
strimzi       my-cluster2-zookeeper-0                        1/1     Running   0          10m
strimzi       my-cluster2-zookeeper-1                        1/1     Running   0          10m
strimzi       my-cluster2-zookeeper-2                        1/1     Running   0          10m
strimzi       my-connect-cluster-connect-6cdb6cd79d-qlnhg    1/1     Running   4          10m
strimzi       strimzi-cluster-operator-54ff55979f-sxrzq      1/1     Running   0          11m

Pod logs:

ist@ist-1207:~$ kubectl logs -f my-mirror-maker-mirror-maker-78544b8c8-rz5ms -n strimzi
Kafka Mirror Maker consumer configuration:
# Bootstrap servers
bootstrap.servers=my-cluster-kafka-bootstrap:9092
# Consumer group
group.id=my-source-group-id
# Provided configuration



security.protocol=PLAINTEXT




Kafka Mirror Maker producer configuration:
# Bootstrap servers
bootstrap.servers=my-cluster2-cluster-kafka-bootstrap:9092
# Provided configuration


security.protocol=PLAINTEXT




2020-11-20 11:41:38,990 INFO Starting readiness poller (io.strimzi.mirrormaker.agent.MirrorMakerAgent) [main]
2020-11-20 11:41:39,176 INFO Starting liveness poller (io.strimzi.mirrormaker.agent.MirrorMakerAgent) [main]
2020-11-20 11:41:39,604 INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) [main]
2020-11-20 11:41:40,128 INFO Starting mirror maker (kafka.tools.MirrorMaker$) [main]
WARNING: The default partition assignment strategy of the mirror maker will change from 'range' to 'roundrobin' in an upcoming release (so that better load balancing can be achieved). If you prefer to make this switch in advance of that release add the following to the corresponding config: 'partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor'
2020-11-20 11:41:40,301 INFO ProducerConfig values: 
    acks = -1
    batch.size = 16384
    bootstrap.servers = [my-cluster2-cluster-kafka-bootstrap:9092]
    buffer.memory = 33554432
    client.dns.lookup = use_all_dns_ips
    client.id = producer-1
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 2147483647
    enable.idempotence = false
    interceptor.classes = []
    internal.auto.downgrade.txn.commit = false
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 0
    max.block.ms = 9223372036854775807
    max.in.flight.requests.per.connection = 1
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metadata.max.idle.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig) [main]
2020-11-20 11:41:40,392 WARN Couldn't resolve server my-cluster2-cluster-kafka-bootstrap:9092 from bootstrap.servers as DNS resolution failed for my-cluster2-cluster-kafka-bootstrap (org.apache.kafka.clients.ClientUtils) [main]
2020-11-20 11:41:40,393 INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms. (org.apache.kafka.clients.producer.KafkaProducer) [main]
2020-11-20 11:41:40,400 ERROR Exception when starting mirror maker. (kafka.tools.MirrorMaker$) [main]
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:441)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:301)
    at kafka.tools.MirrorMaker$MirrorMakerProducer.<init>(MirrorMaker.scala:370)
    at kafka.tools.MirrorMaker$MirrorMakerOptions.checkArgs(MirrorMaker.scala:536)
    at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:87)
    at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
    at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:89)
    at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:415)
    ... 5 more
Exception in thread "main" java.lang.NullPointerException
    at kafka.tools.MirrorMaker$.main(MirrorMaker.scala:94)
    at kafka.tools.MirrorMaker.main(MirrorMaker.scala)
2020-11-20 11:41:40,410 INFO Start clean shutdown. (kafka.tools.MirrorMaker$) [MirrorMakerShutdownHook]
2020-11-20 11:41:40,413 INFO Shutting down consumer threads. (kafka.tools.MirrorMaker$) [MirrorMakerShutdownHook]
2020-11-20 11:41:40,413 INFO Closing producer. (kafka.tools.MirrorMaker$) [MirrorMakerShutdownHook]
2020-11-20 11:41:40,414 ERROR Uncaught exception in thread 'MirrorMakerShutdownHook': (org.apache.kafka.common.utils.KafkaThread) [MirrorMakerShutdownHook]
java.lang.NullPointerException
    at kafka.tools.MirrorMaker$.cleanShutdown(MirrorMaker.scala:172)
    at kafka.tools.MirrorMaker$MirrorMakerOptions.$anonfun$checkArgs$2(MirrorMaker.scala:522)
    at kafka.utils.Exit$.$anonfun$addShutdownHook$1(Exit.scala:38)
    at java.base/java.lang.Thread.run(Thread.java:834)

Here are the services (svc) of my Kafka clusters:

NAMESPACE     NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default       kubernetes                       ClusterIP   10.96.0.1        <none>        443/TCP                      24m
kube-system   kube-dns                         ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       24m
strimzi       my-bridge-bridge-service         ClusterIP   10.108.118.142   <none>        8080/TCP                     11m
strimzi       my-cluster-kafka-bootstrap       ClusterIP   10.109.128.192   <none>        9091/TCP,9092/TCP,9093/TCP   10m
strimzi       my-cluster-kafka-brokers         ClusterIP   None             <none>        9091/TCP,9092/TCP,9093/TCP   10m
strimzi       my-cluster-zookeeper-client      ClusterIP   10.110.172.185   <none>        2181/TCP                     11m
strimzi       my-cluster-zookeeper-nodes       ClusterIP   None             <none>        2181/TCP,2888/TCP,3888/TCP   11m
strimzi       my-cluster2-kafka-bootstrap      ClusterIP   10.105.92.74     <none>        9091/TCP,9092/TCP,9093/TCP   10m
strimzi       my-cluster2-kafka-brokers        ClusterIP   None             <none>        9091/TCP,9092/TCP,9093/TCP   10m
strimzi       my-cluster2-zookeeper-client     ClusterIP   10.98.76.46      <none>        2181/TCP                     11m
strimzi       my-cluster2-zookeeper-nodes      ClusterIP   None             <none>        2181/TCP,2888/TCP,3888/TCP   11m
strimzi       my-connect-cluster-connect-api   ClusterIP   10.101.136.97    <none>        8083/TCP                     11m
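
The producer in the log above fails DNS resolution for my-cluster2-cluster-kafka-bootstrap, and that name does not appear in this service list; only my-cluster2-kafka-bootstrap exists, which suggests the running pod was created from a spec whose producer bootstrapServers differs from the YAML shown above. A quick way to confirm which names resolve inside the cluster is a throwaway lookup pod; a minimal sketch, assuming a busybox image is available and the pod is started in the strimzi namespace so short service names resolve:

# should resolve (the service exists in the list above)
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -n strimzi -- \
  nslookup my-cluster2-kafka-bootstrap
# should fail, matching the "DNS resolution failed" line in the MirrorMaker log
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -n strimzi -- \
  nslookup my-cluster2-cluster-kafka-bootstrap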

1 Answer

娄阳舒
2023-03-14

Sure, MM needs two running clusters in order to mirror between them. I only see one cluster, named my-cluster, while in the MM resource you are mirroring between two clusters named my-source-cluster and my-target-cluster, as referenced in the bootstrap servers. The only bootstrap server you currently have is my-cluster-kafka-bootstrap, and in any case a single cluster is not enough for mirroring.
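
For what it's worth, the YAML posted in the question already points the producer at my-cluster2-kafka-bootstrap:9092, which matches an existing service, while the failing pod's log shows my-cluster2-cluster-kafka-bootstrap:9092, so the running Deployment appears to have been created from an older or different spec. Re-applying the corrected resource and watching the rollout should be enough; a minimal sketch, assuming the manifest is saved as kafka-mirror-maker.yaml (a placeholder file name) and the resource lives in the strimzi namespace:

# re-apply the KafkaMirrorMaker resource after double-checking bootstrapServers
kubectl apply -f kafka-mirror-maker.yaml -n strimzi
# watch the operator roll out a new MirrorMaker pod and wait for it to stay Running
kubectl get pods -n strimzi -w | grep my-mirror-maker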
