Question:

Apache Ignite nodes deployed in the same Kubernetes namespace do not join the same cluster

计均
2023-03-14

The Apache Ignite nodes, deployed as Pods, discover each other via TcpDiscoveryKubernetesIpFinder but cannot communicate, and therefore do not join the same cluster. The relevant log output:

INFO  [org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi] (ServerService Thread Pool -- 5) Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
DEBUG [org.apache.ignite.internal.managers.communication.GridIoManager] (ServerService Thread Pool -- 5) Starting SPI: TcpCommunicationSpi [connectGate=null, connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$FirstConnectionPolicy@48ca2359, enableForcibleNodeKill=false, enableTroubleshootingLog=false, locAddr=null, locHost=0.0.0.0/0.0.0.0, locPort=47100, locPortRange=100, shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=600000, connTimeout=5000, maxConnTimeout=600000, reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=0, slowClientQueueLimit=0, nioSrvr=GridNioServer [selectorSpins=0, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=org.apache.ignite.internal.util.nio.GridDirectParser@30a29315, directMode=true], GridConnectionBytesVerifyFilter], closed=false, directBuf=true, tcpNoDelay=true, sockSndBuf=32768, sockRcvBuf=32768, writeTimeout=2000, idleTimeout=600000, skipWrite=false, skipRead=false, locAddr=0.0.0.0/0.0.0.0:47100, order=LITTLE_ENDIAN, sndQueueLimit=0, directMode=true, sslFilter=null, msgQueueLsnr=null, readerMoveCnt=0, writerMoveCnt=0, readWriteSelectorsAssign=false], shmemSrv=null, usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=32, unackedMsgsBufSize=0, sockWriteTimeout=2000, boundTcpPort=47100, boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null, ctxInitLatch=java.util.concurrent.CountDownLatch@4186e275[Count = 1], stopping=false]
DEBUG [org.apache.ignite.internal.managers.communication.GridIoManager] (ServerService Thread Pool -- 5) Starting SPI implementation: org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi
DEBUG [org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi] (ServerService Thread Pool -- 5) Using parameter [locAddr=null]
DEBUG [org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi] (ServerService Thread Pool -- 5) Using parameter [locPort=47100]
DEBUG [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi]  Grid runnable started: tcp-disco-srvr
DEBUG [org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder] (ServerService Thread Pool -- 5) Getting Apache Ignite endpoints from: https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/default/endpoints/ignite
DEBUG [org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder] (ServerService Thread Pool -- 5) Added an address to the list: 10.244.0.93
DEBUG [org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder] (ServerService Thread Pool -- 5) Added an address to the list: 10.244.0.94
ERROR [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] (ServerService Thread Pool -- 5) Exception on direct send: Invalid argument (connect failed): java.net.ConnectException: Invalid argument (connect failed)
    at java.net.PlainSocketImpl.socketConnect(Native Method)

The Kubernetes manifests:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pgdata
  namespace: default
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ignite
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - endpoints
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ignite
roleRef:
  kind: ClusterRole
  name: ignite
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: ignite
  namespace: default
---
apiVersion: v1
kind: Service
metadata:
  name: ignite
  namespace: default
spec:
  clusterIP: None # custom value.
  ports:
    - port: 9042 # custom value.
  selector:
    type: processing-engine-node
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database-tenant-1
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database-tenant-1
  template:
    metadata:
      labels:
        app: database-tenant-1
    spec:
      containers:
      - name: database-tenant-1
        image: postgres:12
        env:
        - name: "POSTGRES_USER"
          value: "admin"
        - name: "POSTGRES_PASSWORD"
          value: "admin"
        - name: "POSTGRES_DB"
          value: "tenant1"
        volumeMounts:
        - name: pgdata
          mountPath: /var/lib/postgresql/data
          subPath: postgres
        ports:
        - containerPort: 5432
        readinessProbe:
          exec:
            command: ["psql", "-U", "admin", "-d", "tenant1", "-c", "SELECT 1"] # -W would force an interactive password prompt and hang the probe
          initialDelaySeconds: 15
          timeoutSeconds: 2
        livenessProbe:
          exec:
            command: ["psql", "-U", "admin", "-d", "tenant1", "-c", "SELECT 1"] # -W would force an interactive password prompt and hang the probe
          initialDelaySeconds: 45
          timeoutSeconds: 2
      volumes:
        - name: pgdata
          persistentVolumeClaim:
            claimName: pgdata
---
apiVersion: v1
kind: Service
metadata:
  name: database-tenant-1
  namespace: default
  labels:
    app: database-tenant-1
spec:
  type: NodePort
  ports:
   - port: 5432
  selector:
   app: database-tenant-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: processing-engine-master
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: processing-engine-master
  template:
    metadata:
      labels:
        app: processing-engine-master
        type: processing-engine-node
    spec:
      serviceAccountName: ignite
      initContainers:
      - name: check-db-ready
        image: postgres:12
        command: ['sh', '-c', 
          'until pg_isready -h database-tenant-1 -p 5432; 
          do echo waiting for database; sleep 2; done;']
      containers:
      - name: xxxx-engine-master
        image: shostettlerprivateregistry.azurecr.io/xxx/xxx-application:4.2.5
        ports:
            - containerPort: 8081
            - containerPort: 11211 # REST port number.
            - containerPort: 47100 # communication SPI port number.
            - containerPort: 47500 # discovery SPI port number.
            - containerPort: 49112 # JMX port number.
            - containerPort: 10800 # SQL port number.
            - containerPort: 10900 # Thin clients port number.
        volumeMounts:
        - name: config-volume
          mountPath: /opt/project-postgres.yml
          subPath: project-postgres.yml
      volumes:
          - name: config-volume
            configMap:
              name: pe-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: processing-engine-worker
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: processing-engine-worker
  template:
    metadata:
      labels:
        app: processing-engine-worker
        type: processing-engine-node
    spec:
      serviceAccountName: ignite
      initContainers:
      - name: check-db-ready
        image: postgres:12
        command: ['sh', '-c', 
          'until pg_isready -h database-tenant-1 -p 5432; 
          do echo waiting for database; sleep 2; done;']
      containers:
      - name: xxx-engine-worker
        image: shostettlerprivateregistry.azurecr.io/xxx/xxx-worker:4.2.5
        ports:
            - containerPort: 8081
            - containerPort: 11211 # REST port number.
            - containerPort: 47100 # communication SPI port number.
            - containerPort: 47500 # discovery SPI port number.
            - containerPort: 49112 # JMX port number.
            - containerPort: 10800 # SQL port number.
            - containerPort: 10900 # Thin clients port number.

        volumeMounts:
        - name: config-volume
          mountPath: /opt/project-postgres.yml
          subPath: project-postgres.yml
      volumes:
          - name: config-volume
            configMap:
              name: pe-config
The Ignite discovery configuration (Spring XML):

<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="localPort" value="47500" />
        <property name="localAddress" value="127.0.0.1" />
        <property name="networkTimeout" value="10000" />
        <property name="ipFinder">
            <bean id="tcpDiscoveryKubernetesIpFinder" class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"/>
        </property>
    </bean>
</property>
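
By default the IP finder queries the endpoints of a Service named `ignite` in the `default` namespace, which happens to match the manifests above. If the Service or namespace ever differs, they can be set explicitly. A minimal sketch (these property names exist on TcpDiscoveryKubernetesIpFinder in older Ignite releases; newer versions move them to a KubernetesConnectionConfiguration, so check your Ignite version):

```xml
<property name="ipFinder">
    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
        <!-- Service whose Endpoints list the Ignite Pods (matches the headless Service above). -->
        <property name="serviceName" value="ignite"/>
        <!-- Namespace the Service lives in. -->
        <property name="namespace" value="default"/>
    </bean>
</property>
```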

I expect the Pods to communicate and eventually see a topology snapshot like the following:

[ver=1, locNode=a8e6a058, servers=2, clients=0, state=ACTIVE, CPUs=2, offheap=0.24GB, heap=1.5GB]

1 Answer

宰父深
2023-03-14

You have configured discovery to bind to localhost:

<property name="localAddress" value="127.0.0.1" />

This means that nodes in different Pods cannot connect to each other. Try removing this line from the configuration.
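
For reference, a minimal sketch of the discovery block with the `localAddress` line removed and everything else kept as in the question:

```xml
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="localPort" value="47500"/>
        <!-- No localAddress: the node binds to the Pod's own address instead of 127.0.0.1,
             so peers discovered via the Kubernetes IP finder can actually reach it. -->
        <property name="networkTimeout" value="10000"/>
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"/>
        </property>
    </bean>
</property>
```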
