Question:

Kafka connection refused via Kubernetes NodePort

皇甫才良
2023-03-14
apiVersion: v1
kind: Service
metadata:
  name: kafka
  namespace: test
  labels:
    app: kafka-test
    unit: kafka
spec:
  type: NodePort
  selector:
    app: test-app
    unit: kafka
    parentdeployment: test-kafka
  ports:
    - name: kafka
      port: 9092
      targetPort: 9092
      nodePort: 30092
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
  namespace: {{ .Values.test.namespace }}
  labels:
    app: test-app
    unit: kafka
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
        unit: kafka
        parentdeployment: test-kafka
    spec:
      hostname: kafka
      subdomain: kafka
      securityContext:
        fsGroup: {{ .Values.test.groupID }}
      containers:
        - name: kafka
          image: test_kafka:{{ .Values.test.kafkaImageTag }}
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9092
          env:
            - name: IS_KAFKA_CLUSTER
              value: 'false'
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper:2281
            - name: KAFKA_LISTENERS
              value: SSL://:9092
            - name: KAFKA_KEYSTORE_PATH
              value: /opt/kafka/conf/kafka.keystore.jks
            - name: KAFKA_TRUSTSTORE_PATH
              value: /opt/kafka/conf/kafka.truststore.jks
            - name: KAFKA_KEYSTORE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: kafka-secret
                  key: jkskey
            - name: KAFKA_TRUSTSTORE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: kafka-secret
                  key: jkskey
            - name: KAFKA_LOG_DIRS
              value: /opt/kafka/data
            - name: KAFKA_ADV_LISTENERS
              value: SSL://kafka:9092
            - name: KAFKA_CLIENT_AUTH
              value: none
          volumeMounts:
            - mountPath: "/opt/kafka/conf"
              name: kafka-conf-pv
            - mountPath: "/opt/kafka/data"
              name: kafka-data-pv
      volumes:
        - name: kafka-conf-pv
          persistentVolumeClaim:
            claimName: kafka-conf-pvc
        - name: kafka-data-pv
          persistentVolumeClaim:
            claimName: kafka-data-pvc
  selector:
    matchLabels:
      app: test-app
      unit: kafka
      parentdeployment: test-kafka
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  namespace: {{ .Values.test.namespace }}
  labels:
    app: test-ra
    unit: zookeeper
spec:
  type: ClusterIP
  selector:
    app: test-ra
    unit: zookeeper
    parentdeployment: test-zookeeper
  ports:
    - name: zookeeper
      port: 2281
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  namespace: {{ .Values.test.namespace }}
  labels:
    app: test-app
    unit: zookeeper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
        unit: zookeeper
        parentdeployment: test-zookeeper
    spec:
      hostname: zookeeper
      subdomain: zookeeper
      securityContext:
        fsGroup: {{ .Values.test.groupID }}
      containers:
        - name: zookeeper
          image: test_zookeeper:{{ .Values.test.zookeeperImageTag }}
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 2281
          env:
            - name: IS_ZOOKEEPER_CLUSTER
              value: 'false'
            - name: ZOOKEEPER_SSL_CLIENT_PORT
              value: '2281'
            - name: ZOOKEEPER_DATA_DIR
              value: /opt/zookeeper/data
            - name: ZOOKEEPER_DATA_LOG_DIR
              value: /opt/zookeeper/data/log
            - name: ZOOKEEPER_KEYSTORE_PATH
              value: /opt/zookeeper/conf/zookeeper.keystore.jks
            - name: ZOOKEEPER_KEYSTORE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: zookeeper-secret
                  key: jkskey
            - name: ZOOKEEPER_TRUSTSTORE_PATH
              value: /opt/zookeeper/conf/zookeeper.truststore.jks
            - name: ZOOKEEPER_TRUSTSTORE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: zookeeper-secret
                  key: jkskey
          volumeMounts:
            - mountPath: "/opt/zookeeper/data"
              name: zookeeper-data-pv
            - mountPath: "/opt/zookeeper/conf"
              name: zookeeper-conf-pv
      volumes:
        - name: zookeeper-data-pv
          persistentVolumeClaim:
            claimName: zookeeper-data-pvc
        - name: zookeeper-conf-pv
          persistentVolumeClaim:
            claimName: zookeeper-conf-pvc
  selector:
    matchLabels:
      app: test-ra
      unit: zookeeper
      parentdeployment: test-zookeeper

kubectl describe on the kafka Service also shows the exposed NodePort:

Type:                     NodePort
IP:                       10.233.1.106
Port:                     kafka  9092/TCP
TargetPort:               9092/TCP
NodePort:                 kafka  30092/TCP
Endpoints:                10.233.66.15:9092
Session Affinity:         None
External Traffic Policy:  Cluster

I have a producer binary that sends some messages to Kafka. Since this is a 3-node cluster deployment, I connect to Kafka using my master node's IP and the Kafka NodePort (30092).
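For reference, since the producer binary itself is not shown here, a connection of the same shape can be attempted from outside the cluster with the stock console producer; paths, topic name, and password below are placeholders, not values from the setup above:

# client-ssl.properties (hypothetical client-side SSL settings)
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=<truststore-password>

# bootstrap against the master node IP and the NodePort
# (older Kafka releases use --broker-list instead of --bootstrap-server)
bin/kafka-console-producer.sh \
  --bootstrap-server <master-node-ip>:30092 \
  --topic <some-topic> \
  --producer.config client-ssl.properties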

But my binary gets a dial tcp :9092: connect: connection refused error. I cannot understand why the connection is refused even though the nodePort-to-targetPort translation succeeds. Debugging further, I see the following DEBUG entries in the Kafka logs:

[2021-01-13 08:17:51,692] DEBUG Accepted connection from /10.233.125.0:1564 on /10.233.66.15:9092 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] (kafka.network.Acceptor)
[2021-01-13 08:17:51,692] DEBUG Processor 0 listening to new connection from /10.233.125.0:1564 (kafka.network.Processor)
[2021-01-13 08:17:51,702] DEBUG [SslTransportLayer channelId=10.233.66.15:9092-10.233.125.0:1564-245 key=sun.nio.ch.SelectionKeyImpl@43dc2246] SSL peer is not authenticated, returning ANONYMOUS instead (org.apache.kafka.common.network.SslTransportLayer)
[2021-01-13 08:17:51,702] DEBUG [SslTransportLayer channelId=10.233.66.15:9092-10.233.125.0:1564-245 key=sun.nio.ch.SelectionKeyImpl@43dc2246] SSL handshake completed successfully with peerHost '10.233.125.0' peerPort 1564 peerPrincipal 'User:ANONYMOUS' cipherSuite 'TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256' (org.apache.kafka.common.network.SslTransportLayer)
[2021-01-13 08:17:51,702] DEBUG [SocketServer brokerId=1001] Successfully authenticated with /10.233.125.0 (org.apache.kafka.common.network.Selector)
[2021-01-13 08:17:51,707] DEBUG [SocketServer brokerId=1001] Connection with /10.233.125.0 disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
        at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:614)
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:95)
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:448)
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:398)
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:678)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:580)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
        at kafka.network.Processor.poll(SocketServer.scala:861)
        at kafka.network.Processor.run(SocketServer.scala:760)
        at java.lang.Thread.run(Thread.java:748)

1 Answer

施权
2023-03-14

We faced a similar problem in one of our Kafka setups; we ended up creating two Kubernetes Services with the same selector labels: one of type ClusterIP for internal (in-cluster) traffic and one of type NodePort for external access.

Internal access

apiVersion: v1
kind: Service
metadata:
  name: kafka-internal
  namespace: test
  labels:
    app: kafka-test
    unit: kafka
spec:
  type: ClusterIP
  selector:
    app: test-app
    unit: kafka
    parentdeployment: test-kafka
  ports:
    - name: kafka
      port: 9092
      protocol: TCP

External access

apiVersion: v1
kind: Service
metadata:
  name: kafka-external
  namespace: test
  labels:
    app: kafka-test
    unit: kafka
spec:
  type: NodePort
  selector:
    app: test-app
    unit: kafka
    parentdeployment: test-kafka
  ports:
    - name: kafka
      port: 9092
      targetPort: 9092
      protocol: TCP
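One design note on why this two-Service split is usually paired with a listener change on the broker side: after the initial bootstrap, Kafka clients reconnect to whatever address the broker advertises. With a single advertised listener of SSL://kafka:9092, as in the deployment above, clients that bootstrap through the NodePort are then pointed back at kafka:9092, which is not reachable from outside the cluster. Below is a hedged sketch of the usual dual-listener settings in plain server.properties form; how these map onto the image's KAFKA_LISTENERS / KAFKA_ADV_LISTENERS environment variables is an assumption here, and the EXTERNAL listener name and port 9094 are placeholders that would have to match the external Service's targetPort:

# Sketch only: separate advertised addresses for in-cluster and NodePort clients
listeners=INTERNAL://:9092,EXTERNAL://:9094
advertised.listeners=INTERNAL://kafka-internal:9092,EXTERNAL://<node-ip>:30092
listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL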