Question:

Spring Cloud Stream Kafka with microservices and a Docker Compose error

陈修诚
2023-03-14

I want to see whether I can connect Spring Cloud Stream to Kafka running in Docker containers with the help of docker-compose, but I'm stuck and haven't found a solution yet. Please help.

I'm using Spring microservices. So far I haven't found anything that helps.

Docker Compose file with Kafka and Zookeeper:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"

  kafkaserver:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=192.168.99.100 #kafka
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CREATE_TOPICS=dresses:1:1, ratings:1:1
      - KAFKA_SOCKET_REQUEST_MAX_BYTES=2000000000
      - KAFKA_HEAP_OPTS=-Xmx512M -Xmx5g
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    depends_on:
      - zookeeper

Docker Compose file with my Spring services:

version: '2'
services: 
...
  informatii:
    container_name: informatii
    build: C:\Users\marius\IdeaProjects\tototo\informatii-service
    restart: on-failure
    ports:
      - 1000:1000
    environment:
      #SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS: 192:168:99:100:9092
      #SPRING_CLOUD_STREAM_KAFKA_BINDER_DEFAULTBROKERPORT: 9092
      #KAFKA_ADVERTISED_LISTENERS=PLAINTEXT: //kafka:9092 \
    

App.properties for my services:

server.port=2000
spring.cloud.stream.bindings.output.destination=orgChangeTopic
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.kafka.binder.zkNodes=kafka
 # zookeeper:2181 #localhost>? zipkin
spring.cloud.stream.kafka.binder.brokers=kafka
spring.zipkin.baseUrl=http://zipkin:9411
--------
spring.cloud.stream.bindings.inboundOrgChanges.destination=orgChangeTopic
spring.cloud.stream.bindings.inboundOrgChanges.content-type=application/json
spring.cloud.stream.bindings.inboundOrgChanges.group=informatiiGroup
spring.cloud.stream.kafka.binder.brokers=kafka
spring.cloud.stream.kafka.binder.zkNodes=kafka
spring.zipkin.baseUrl=http://zipkin:9411

Logs from the Docker containers:

    kafka          | waiting for kafka to be ready
    kafka          | [Configuring] 'advertised.port' in '/opt/kafka/config/server.properties'
    kafka          | Excluding KAFKA_HOME from broker config
    kafka          | [Configuring] 'advertised.host.name' in '/opt/kafka/config/server.properties'
    kafka          | [Configuring] 'port' in '/opt/kafka/config/server.properties'
    kafka          | [Configuring] 'socket.request.max.bytes' in '/opt/kafka/config/server.properties'
    kafka          | [Configuring] 'broker.id' in '/opt/kafka/config/server.properties'
    kafka          | Excluding KAFKA_VERSION from broker config
    kafka          | [Configuring] 'zookeeper.connect' in '/opt/kafka/config/server.properties'
    kafka          | [Configuring] 'log.dirs' in '/opt/kafka/config/server.properties'
    kafka          | waiting for kafka to be ready
    kafka          | waiting for kafka to be ready
    zookeeper      | ZooKeeper JMX enabled by default
    zookeeper      | Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
    zookeeper      | 2021-03-27 00:05:54,081 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
    zookeeper      | 2021-03-27 00:05:54,182 [myid:] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
    zookeeper      | 2021-03-27 00:05:54,183 [myid:] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
    zookeeper      | 2021-03-27 00:05:54,190 [myid:] - WARN  [main:QuorumPeerMain@116] - Either no config or no quorum defined in config, running  in standalone mode
    zookeeper      | 2021-03-27 00:05:54,196 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
    zookeeper      | 2021-03-27 00:05:54,350 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
    zookeeper      | 2021-03-27 00:05:54,366 [myid:] - INFO  [main:ZooKeeperServerMain@98] - Starting server
    zookeeper      | 2021-03-27 00:05:54,409 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.

    kafka          | waiting for kafka to be ready
    kafka          | waiting for kafka to be ready
    kafka          | [2021-03-27 00:06:03,740] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
    kafka          | waiting for kafka to be ready
    kafka          | [2021-03-27 00:06:17,450] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
    kafka          | [2021-03-27 00:06:17,456] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
    kafka          | [2021-03-27 00:06:17,459] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
    kafka          | [2021-03-27 00:06:17,464] INFO Client environment:os.memory.free=20MB (org.apache.zookeeper.ZooKeeper)
    kafka          | [2021-03-27 00:06:17,467] INFO Client environment:os.memory.max=5120MB (org.apache.zookeeper.ZooKeeper)
    kafka          | [2021-03-27 00:06:17,470] INFO Client environment:os.memory.total=32MB (org.apache.zookeeper.ZooKeeper)
    kafka          | [2021-03-27 00:06:17,534] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@333291e3 (org.apache.zookeeper.ZooKeeper)
    kafka          | [2021-03-27 00:06:17,756] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
    kafka          | [2021-03-27 00:06:18,005] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
    kafka          | [2021-03-27 00:06:18,053] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
    kafka          | [2021-03-27 00:06:18,182] INFO Opening socket connection to server zookeeper/172.24.0.3:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
    zookeeper      | 2021-03-27 00:05:55,247 [myid:] - INFO  [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
    zookeeper      | 2021-03-27 00:06:18,709 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.24.0.4:54606
    zookeeper      | 2021-03-27 00:06:18,796 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /172.24.0.4:54606
    zookeeper      | 2021-03-27 00:06:18,945 [myid:] - INFO  [SyncThread:0:FileTxnLog@213] - Creating new log file: log.1
    kafka          | [2021-03-27 00:06:18,683] INFO Socket connection established, initiating session, client: /172.24.0.4:54606, server: zookeeper/172.24.0.3:2181 (org.apache.zookeeper.ClientCnxn)
    kafka          | [2021-03-27 00:06:19,190] INFO Session establishment complete on server zookeeper/172.24.0.3:2181, sessionid = 0x100033564dc0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
    kafka          | [2021-03-27 00:06:19,267] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
    zookeeper      | 2021-03-27 00:06:19,133 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@694] - Established session 0x100033564dc0000 with negotiated timeout 18000 for client /172.24.0.4:54606
    zookeeper      | 2021-03-27 00:06:21,262 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x100033564dc0000 type:create cxid:0x2 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
    zookeeper      | 2021-03-27 00:06:21,326 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x100033564dc0000 type:create cxid:0x6 zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config
    kafka          | waiting for kafka to be ready
    kafka          | [2021-03-27 00:06:22,473] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
    kafka          | [2021-03-27 00:06:22,684] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
    zookeeper      | 2021-03-27 00:06:21,520 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x100033564dc0000 type:create cxid:0x9 zxid:0xa txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin

    kafka          | [2021-03-27 00:06:27,109] INFO KafkaConfig values:
    kafka          |        advertised.host.name = 192.168.99.100
    kafka          |        advertised.listeners = null
    kafka          |        advertised.port = 9092
   ...

    kafka          |        listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    kafka          |        listeners = null
   ...
    kafka          |  (kafka.server.KafkaConfig)
    kafka          | [2021-03-27 00:06:27,256] INFO KafkaConfig values:
    kafka          |        advertised.host.name = 192.168.99.100
    kafka          |        advertised.listeners = null
    kafka          |        advertised.port = 9092
    ...
    kafka          |        listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    kafka          |        listeners = null
    kafka          |        log.cleaner.backoff.ms = 15000
    kafka          |        log.cleaner.dedupe.buffer.size = 134217728
    kafka          |        log.cleaner.delete.retention.ms = 86400000
  
    kafka          | [2021-03-27 00:06:42,648] INFO Created ConnectionAcceptRate-PLAINTEXT sensor, quotaLimit=2147483647 (kafka.network.ConnectionQuotas)
    kafka          | [2021-03-27 00:06:42,675] INFO Updated PLAINTEXT max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
    kafka          | [2021-03-27 00:06:42,709] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
    kafka          | [2021-03-27 00:06:47,696] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
    kafka          | [2021-03-27 00:06:48,288] INFO Stat of the created znode at /brokers/ids/1001 is: 25,25,1616803608051,1616803608051,1,0,0,72061121898217472,212,0,25
    kafka          |  (kafka.zk.KafkaZkClient)
    kafka          | [2021-03-27 00:06:48,317] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: PLAINTEXT://192.168.99.100:9092, czxid (broker epoch): 25 (kafka.zk.KafkaZkClient)
    kafka          | [2021-03-27 00:06:50,511] INFO [ExpirationReaper-1001-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
    kafka          | creating topics: dresses:1:1
    kafka          | creating topics:  ratings:1:1
    kafka          | [2021-03-27 00:06:52,268] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.group.GroupCoordinator)
    kafka          | [2021-03-27 00:06:52,312] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
  
    zookeeper      | 2021-03-27 00:06:58,423 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@596] - Got user-level KeeperException when processing sessionid:0x100033564dc0000 type:multi cxid:0x3e zxid:0x1e txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
    zookeeper      | 2021-03-27 00:07:18,457 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.24.0.4:54612
    zookeeper      | 2021-03-27 00:07:18,492 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.24.0.4:54614
    zookeeper      | 2021-03-27 00:07:18,521 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /172.24.0.4:54612
    zookeeper      | 2021-03-27 00:07:18,534 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@694] - Established session 0x100033564dc0001 with negotiated timeout 30000 for client /172.24.0.4:54612
    zookeeper      | 2021-03-27 00:07:18,566 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /172.24.0.4:54614
    zookeeper      | 2021-03-27 00:07:18,577 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@694] - Established session 0x100033564dc0002 with negotiated timeout 30000 for client /172.24.0.4:54614
    zookeeper      | 2021-03-27 00:07:25,621 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x100033564dc0001 type:setData cxid:0x4 zxid:0x21 txntype:-1 reqpath:n/a Error Path:/config/topics/ratings Error:KeeperErrorCode = NoNode for /config/topics/ratings
    kafka          | [2021-03-27 00:06:59,948] INFO [broker-1001-to-controller-send-thread]: Recorded new controller, from now on will use broker 1001 (kafka.server.BrokerToControllerRequestThread)
    zookeeper      | 2021-03-27 00:07:25,917 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x100033564dc0002 type:setData cxid:0x4 zxid:0x22 txntype:-1 reqpath:n/a Error Path:/config/topics/dresses Error:KeeperErrorCode = NoNode for /config/topics/dresses
    zookeeper      | 2021-03-27 00:07:26,569 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x100033564dc0001
    kafka          | Created topic ratings.
    zookeeper      | 2021-03-27 00:07:26,606 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /172.24.0.4:54612 which had sessionid 0x100033564dc0001
    zookeeper      | 2021-03-27 00:07:28,347 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x100033564dc0002
    kafka          | Created topic dresses.
    kafka          | [2021-03-27 00:07:31,500] INFO [ReplicaFetcherManager on broker 1001] Removed fetcher for partitions Set(ratings-0) (kafka.server.ReplicaFetcherManager)
    kafka          | [2021-03-27 00:07:46,393] INFO [Log partition=ratings-0, dir=/kafka/kafka-logs-0d39e2b35aa6] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
    kafka          | [2021-03-27 00:07:47,707] INFO Created log for partition ratings-0 in /kafka/kafka-logs-0d39e2b35aa6/ratings-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.7-IV2, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
    kafka          | [2021-03-27 00:07:47,836] INFO [Partition ratings-0 broker=1001] No checkpointed highwatermark is found for partition ratings-0 (kafka.cluster.Partition)
    kafka          | [2021-03-27 00:07:47,932] INFO [Partition ratings-0 broker=1001] Log loaded for partition ratings-0 with initial high watermark 0 (kafka.cluster.Partition)
    kafka          | [2021-03-27 00:07:53,030] INFO [ReplicaFetcherManager on broker 1001] Removed fetcher for partitions Set(dresses-0) (kafka.server.ReplicaFetcherManager)
    kafka          | [2021-03-27 00:07:53,591] INFO [Log partition=dresses-0, dir=/kafka/kafka-logs-0d39e2b35aa6] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
    kafka          | [2021-03-27 00:07:53,864] INFO Created log for partition dresses-0 in /kafka/kafka-logs-0d39e2b35aa6/dresses-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.7-IV2, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
    kafka          | [2021-03-27 00:07:54,058] INFO [Partition dresses-0 broker=1001] No checkpointed highwatermark is found for partition dresses-0 (kafka.cluster.Partition)
    zookeeper      | 2021-03-27 00:07:28,394 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /172.24.0.4:54614 which had sessionid 0x100033564dc0002
    zookeeper      | 2021-03-27 00:08:11,558 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /192.168.99.1:56469
    zookeeper      | 2021-03-27 00:08:11,574 [myid:] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@383] - Exception causing close of session 0x0: Len error 1195725856
    kafka          | [2021-03-27 00:07:54,097] INFO [Partition dresses-0 broker=1001] Log loaded for partition dresses-0 with initial high watermark 0 (kafka.cluster.Partition)
    zookeeper      | 2021-03-27 00:08:11,578 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /192.168.99.1:56469 (no session established for client)
    kafka          | [2021-03-27 00:24:07,405] INFO Creating topic orgChangeTopic with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1001)) (kafka.zk.AdminZkClient)
    kafka          | [2021-03-27 00:24:08,389] INFO [ReplicaFetcherManager on broker 1001] Removed fetcher for partitions Set(orgChangeTopic-0) (kafka.server.ReplicaFetcherManager)
    kafka          | [2021-03-27 00:24:08,494] INFO [Log partition=orgChangeTopic-0, dir=/kafka/kafka-logs-0d39e2b35aa6] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
    kafka          | [2021-03-27 00:24:08,584] INFO Created log for partition orgChangeTopic-0 in /kafka/kafka-logs-0d39e2b35aa6/orgChangeTopic-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.7-IV2, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
    kafka          | [2021-03-27 00:24:08,617] INFO [Partition orgChangeTopic-0 broker=1001] No checkpointed highwatermark is found for partition orgChangeTopic-0 (kafka.cluster.Partition)

Docker logs from my services:

   informatii    | 2021-03-27 00:23:29.604  INFO 1 --- [           main] o.s.s.c.ThreadPoolTaskScheduler          : Initializing ExecutorService 'taskScheduler'
    informatii    | 2021-03-27 00:23:31.037  INFO 1 --- [           main] onConfiguration$FunctionBindingRegistrar : Functional binding is disabled due to the presense of @EnableBinding annotation in your configuration
    informatii    | 2021-03-27 00:23:39.353  INFO 1 --- [           main] o.s.c.s.m.DirectWithAttributesChannel    : Channel 'informatii-service-1.input' has 1 subscriber(s).
    informatii    | 2021-03-27 00:23:39.448  INFO 1 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
    informatii    | 2021-03-27 00:23:39.457  INFO 1 --- [           main] o.s.i.channel.PublishSubscribeChannel    : Channel 'informatii-service-1.errorChannel' has 1 subscriber(s).
    informatii    | 2021-03-27 00:23:39.462  INFO 1 --- [           main] o.s.i.endpoint.EventDrivenConsumer       : started bean '_org.springframework.integration.errorLogger'
    
    informatii    | 2021-03-27 00:24:20.088  INFO 1 --- [           main] o.s.c.s.binder.DefaultBinderFactory      : Creating binder: kafka
    divizie       | 2021-03-27 00:24:22.519  INFO 1 --- [           main] c.n.d.provider.DiscoveryJerseyProvider   : Using JSON decoding codec LegacyJacksonJson
    divizie       | 2021-03-27 00:24:29.323  INFO 1 --- [           main] c.n.d.provider.DiscoveryJerseyProvider   : Using XML encoding codec XStreamXml
    informatii    | 2021-03-27 00:24:27.450  INFO 1 --- [           main] b.c.PropertySourceBootstrapConfiguration : Located property source: [BootstrapPropertySource {name='bootstrapProperties-configClient'}, BootstrapPropertySource {name='bootstrapProperties-https://github.
    nformatii-service.yml'}, BootstrapPropertySource {name='bootstrapProperties-https://github.com/hideyourname/kubes.git/application.yml'}]
    informatii    | 2021-03-27 00:24:37.060  INFO 1 --- [           main] o.s.c.s.binder.DefaultBinderFactory      : Caching the binder: kafka
    informatii    | 2021-03-27 00:24:37.079  INFO 1 --- [           main] o.s.c.s.binder.DefaultBinderFactory      : Retrieving cached binder: kafka
 

    informatii    | 2021-03-27 00:44:45.335  WARN 1 --- [ask-scheduler-3] org.apache.kafka.clients.ClientUtils     : Couldn't resolve server 192:168:99:100:9092 from bootstrap.servers as DNS resolution failed for 192:168:99:100
    gateway       | 2021-03-27 00:40:00.105  INFO 1 --- [trap-executor-0] c.n.d.s.r.aws.ConfigClusterResolver      : Resolving eureka endpoints via configuration
    server        | 2021-03-27 00:44:42.136  INFO 1 --- [a-EvictionTimer] c.n.e.registry.AbstractInstanceRegistry  : Running the evict task with compensationTime 0ms
    informatii    | 2021-03-27 00:44:45.343 ERROR 1 --- [ask-scheduler-3] o.s.cloud.stream.binding.BindingService  : Failed to create consumer binding; retrying in 30 seconds
    informatii    |
    informatii    | org.springframework.cloud.stream.binder.BinderException: Exception thrown while starting consumer:
    informatii    |         at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:462) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
    informatii    |         at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:91) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
    informatii    |         at org.springframework.cloud.stream.binder.AbstractBinder.bindConsumer(AbstractBinder.java:143) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
    informatii    |         at org.springframework.cloud.stream.binding.BindingService.lambda$rescheduleConsumerBinding$1(BindingService.java:201) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
    informatii    |         at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) ~[spring-context-5.2.13.RELEASE.jar!/:5.2.13.RELEASE]
    informatii    |         at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
    informatii    |         at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
    informatii    |         at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[na:na]
    informatii    |         at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
    informatii    |         at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
    informatii    |         at java.base/java.lang.Thread.run(Thread.java:835) ~[na:na]
    informatii    | Caused by: org.apache.kafka.common.KafkaException: Failed to create new KafkaAdminClient
    informatii    |         at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:479) ~[kafka-clients-2.5.1.jar!/:na]
    informatii    |         at org.apache.kafka.clients.admin.Admin.create(Admin.java:71) ~[kafka-clients-2.5.1.jar!/:na]
    informatii    |         at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:49) ~[kafka-clients-2.5.1.jar!/:na]
    informatii    |         at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createAdminClient(KafkaTopicProvisioner.java:259) ~[spring-cloud-stream-binder-kafka-core-3.0.11.RELEASE.jar!/:3.0.11.
    informatii    |         at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.doProvisionConsumerDestination(KafkaTopicProvisioner.java:229) ~[spring-cloud-stream-binder-kafka-core-3.0.11.RELEASE.
    informatii    |         at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionConsumerDestination(KafkaTopicProvisioner.java:196) ~[spring-cloud-stream-binder-kafka-core-3.0.11.RELEASE.ja
    informatii    |         at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionConsumerDestination(KafkaTopicProvisioner.java:86) ~[spring-cloud-stream-binder-kafka-core-3.0.11.RELEASE.jar
    informatii    |         at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:403) ~[spring-cloud-stream-3.0.11.RELEASE.jar!/:3.0.11.RELEASE]
    informatii    |         ... 10 common frames omitted
    informatii    | Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
    informatii    |         at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:89) ~[kafka-clients-2.5.1.jar!/:na]
    informatii    |         at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48) ~[kafka-clients-2.5.1.jar!/:na]
    informatii    |         at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:439) ~[kafka-clients-2.5.1.jar!/:na]
    informatii    |         ... 17 common frames omitted
    informatii    |
    informatii    | 2021-03-27 00:45:15.354  INFO 1 --- [ask-scheduler-6] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values:
    informatii    |         bootstrap.servers = [192:168:99:100:9092]
    informatii    |         client.dns.lookup = default
    informatii    |         client.id =
...
    informatii    | 2021-03-27 00:45:15.364  WARN 1 --- [ask-scheduler-6] org.apache.kafka.clients.ClientUtils     : Couldn't resolve server 192:168:99:100:9092 from bootstrap.servers as DNS resolution failed for 192:168:99:100

1 answer

景成和
2023-03-14

You need to change it back to

KAFKA_ADVERTISED_HOST_NAME=kafka

Then use the same Compose file for Zookeeper, Kafka, and your applications to make sure they are all on the same network.
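
For illustration, a single Compose file along those lines might look roughly like this. This is only a sketch based on the values in the question: the informatii build path is a placeholder, and the essential changes are advertising Kafka by its container name and keeping all services in one file so they share the default Compose network:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"

  kafkaserver:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      # advertise the container name so other containers on this network can reach the broker
      - KAFKA_ADVERTISED_HOST_NAME=kafka
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CREATE_TOPICS=dresses:1:1,ratings:1:1
    depends_on:
      - zookeeper

  informatii:
    container_name: informatii
    build: ./informatii-service   # placeholder: relative path to the service project
    restart: on-failure
    ports:
      - "1000:1000"
    environment:
      # point the binder at the broker's DNS name on the Compose network
      - SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS=kafka:9092
    depends_on:
      - kafkaserver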

Also, if you plan to keep using that property, the zkNodes address should point at the Zookeeper container.
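
The binder properties inside the service would then reference the container names rather than the Docker Machine IP, for example (a sketch using the container names above; zkNodes is only read by older binder versions):

spring.cloud.stream.kafka.binder.brokers=kafka:9092
spring.cloud.stream.kafka.binder.zkNodes=zookeeper:2181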
