Question:

Unable to establish an MQTT connection to a VerneMQ cluster in k8s behind an Istio proxy

文心思
2023-03-14

I am setting up an on-prem k8s cluster. For testing I use a single-node cluster on a VM, set up with kubeadm. My requirements include running an MQTT cluster (VerneMQ) in k8s, exposed externally through an ingress (Istio).

Without deploying an ingress, I can connect (mosquitto_sub) through a NodePort or LoadBalancer service.
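
A minimal smoke test against the plain service looks like this (a sketch; the placeholder stands for a node IP plus the NodePort, or for the LoadBalancer IP):

mosquitto_sub -h <node-or-lb-ip> -p 1883 -t hello -d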

Istio was installed with istioctl install --set profile=demo.
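
Sidecar injection itself comes from the istio-injection=enabled label on the namespace (see the manifests below); the equivalent imperative command would be roughly:

kubectl label namespace exposed-with-istio istio-injection=enabled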

In the exposed-with-loadbalancer namespace I deployed VerneMQ with a LoadBalancer service. It works, which is how I know VerneMQ is reachable (using mosquitto_sub -h <host> -p 1883 -t hello, where the host is the ClusterIP or the ExternalIP of svc/vernemq). The dashboard is reachable at host:8888/status, and the "Clients online" counter on the dashboard increments.
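
The dashboard check can be scripted the same way (a sketch; substitute the service's actual external IP):

curl http://<external-ip>:8888/status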

In exposed-with-istio I deployed VerneMQ with a ClusterIP service, an Istio Gateway and a VirtualService. As soon as the istio-proxy is injected, mosquitto_sub cannot subscribe either through the svc/vernemq IP or through the Istio ingress (gateway) IP. The command just hangs forever, retrying constantly. Meanwhile, the VerneMQ dashboard endpoint is reachable through both the service IP and the Istio gateway.

My guess is that the istio-proxy has to be configured somehow for MQTT to work.
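
The ingress gateway service below was presumably dumped with kubectl describe (my guess at the exact command; the names match the output):

kubectl describe svc istio-ingressgateway -n istio-system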

Name:                     istio-ingressgateway
Namespace:                istio-system
Labels:                   app=istio-ingressgateway
                          install.operator.istio.io/owning-resource=installed-state
                          install.operator.istio.io/owning-resource-namespace=istio-system
                          istio=ingressgateway
                          istio.io/rev=default
                          operator.istio.io/component=IngressGateways
                          operator.istio.io/managed=Reconcile
                          operator.istio.io/version=1.7.0
                          release=istio
Annotations:              Selector:  app=istio-ingressgateway,istio=ingressgateway
Type:                     LoadBalancer
IP:                       10.100.213.45
LoadBalancer Ingress:     192.168.100.240
Port:                     status-port  15021/TCP
TargetPort:               15021/TCP
Port:                     http2  80/TCP
TargetPort:               8080/TCP
Port:                     https  443/TCP
TargetPort:               8443/TCP
Port:                     tcp  31400/TCP
TargetPort:               31400/TCP
Port:                     tls  15443/TCP
TargetPort:               15443/TCP
Session Affinity:         None
External Traffic Policy:  Cluster
...
2020-08-24T07:57:52.294477Z debug   envoy filter    original_dst: New connection accepted
2020-08-24T07:57:52.294516Z debug   envoy filter    tls inspector: new connection accepted
2020-08-24T07:57:52.294532Z debug   envoy filter    http inspector: new connection accepted
2020-08-24T07:57:52.294580Z debug   envoy filter    [C5645] new tcp proxy session
2020-08-24T07:57:52.294614Z debug   envoy filter    [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294638Z debug   envoy pool  creating a new connection
2020-08-24T07:57:52.294671Z debug   envoy pool  [C5646] connecting
2020-08-24T07:57:52.294684Z debug   envoy connection    [C5646] connecting to 127.0.0.1:1883
2020-08-24T07:57:52.294725Z debug   envoy connection    [C5646] connection in progress
2020-08-24T07:57:52.294746Z debug   envoy pool  queueing request due to no available connections
2020-08-24T07:57:52.294750Z debug   envoy conn_handler  [C5645] new connection
2020-08-24T07:57:52.294768Z debug   envoy connection    [C5646] delayed connection error: 111
2020-08-24T07:57:52.294772Z debug   envoy connection    [C5646] closing socket: 0
2020-08-24T07:57:52.294783Z debug   envoy pool  [C5646] client disconnected
2020-08-24T07:57:52.294790Z debug   envoy filter    [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294794Z debug   envoy connection    [C5645] closing data_to_write=0 type=1
2020-08-24T07:57:52.294796Z debug   envoy connection    [C5645] closing socket: 1
2020-08-24T07:57:52.294864Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=12
2020-08-24T07:57:52.294882Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=16
2020-08-24T07:57:52.294885Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=20
2020-08-24T07:57:52.294887Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=24
2020-08-24T07:57:52.294891Z debug   envoy conn_handler  [C5645] adding to cleanup list
2020-08-24T07:57:52.294949Z debug   envoy pool  [C5646] connection destroyed
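
For what it's worth, errno 111 is ECONNREFUSED: the sidecar's inbound cluster dials 127.0.0.1:1883 and nothing accepts the connection on loopback, so it is worth checking which interface the MQTT listener is actually bound to (a sketch; vmq-admin ships in the official VerneMQ image):

kubectl exec -n <namespace> <vernemq-pod> -c vernemq -- vmq-admin listener show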

These are the logs from the istio-ingressgateway. The IP 10.244.243.205 belongs to the VerneMQ pod, not to the service (probably as intended).

2020-08-24T08:48:31.536593Z debug   envoy filter    [C13236] new tcp proxy session
2020-08-24T08:48:31.536702Z debug   envoy filter    [C13236] Creating connection to cluster outbound|1883||vernemq.test.svc.cluster.local
2020-08-24T08:48:31.536728Z debug   envoy pool  creating a new connection
2020-08-24T08:48:31.536778Z debug   envoy pool  [C13237] connecting
2020-08-24T08:48:31.536784Z debug   envoy connection    [C13237] connecting to 10.244.243.205:1883
2020-08-24T08:48:31.537074Z debug   envoy connection    [C13237] connection in progress
2020-08-24T08:48:31.537116Z debug   envoy pool  queueing request due to no available connections
2020-08-24T08:48:31.537138Z debug   envoy conn_handler  [C13236] new connection
2020-08-24T08:48:31.537181Z debug   envoy connection    [C13237] connected
2020-08-24T08:48:31.537204Z debug   envoy pool  [C13237] assigning connection
2020-08-24T08:48:31.537221Z debug   envoy filter    TCP:onUpstreamEvent(), requestedServerName: 
2020-08-24T08:48:31.537880Z debug   envoy misc  Unknown error code 104 details Connection reset by peer
2020-08-24T08:48:31.537907Z debug   envoy connection    [C13237] remote close
2020-08-24T08:48:31.537913Z debug   envoy connection    [C13237] closing socket: 0
2020-08-24T08:48:31.537938Z debug   envoy pool  [C13237] client disconnected
2020-08-24T08:48:31.537953Z debug   envoy connection    [C13236] closing data_to_write=0 type=0
2020-08-24T08:48:31.537958Z debug   envoy connection    [C13236] closing socket: 1
2020-08-24T08:48:31.538156Z debug   envoy conn_handler  [C13236] adding to cleanup list
2020-08-24T08:48:31.538191Z debug   envoy pool  [C13237] connection destroyed
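
Errno 104 is ECONNRESET, and it matches the sidecar log above: the gateway connects to the pod on 1883, the pod's sidecar in turn is refused on 127.0.0.1:1883 and drops the session, which the gateway then sees as a reset by peer. The gateway's Envoy log level can be raised the same way as the pod's (a sketch; fill in the actual gateway pod name):

kubectl exec -n istio-system <ingressgateway-pod> -- curl -s -X POST 'http://localhost:15000/logging?level=debug'

The manifests in use follow below.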

apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-istio
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
  namespace: exposed-with-istio
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["endpoints", "deployments", "replicasets", "pods"]
    verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
subjects:
  - kind: ServiceAccount
    name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  namespace: exposed-with-istio
  labels:
    app: vernemq
spec:
  selector:
    app: vernemq
  type: ClusterIP
  ports:
    - port: 4369
      name: empd
    - port: 44053
      name: vmq
    - port: 8888
      name: http-dashboard
    - port: 1883
      name: tcp-mqtt
      targetPort: 1883
    - port: 9001
      name: tcp-mqtt-ws
      targetPort: 9001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vernemq
  namespace: exposed-with-istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
        - name: vernemq
          image: vernemq/vernemq
          ports:
            - containerPort: 1883
              name: tcp-mqtt
              protocol: TCP
            - containerPort: 8080
              name: tcp-mqtt-ws
            - containerPort: 8888
              name: http-dashboard
            - containerPort: 4369
              name: epmd
            - containerPort: 44053
              name: vmq
            - containerPort: 9100-9109 # shortened
          env:
            - name: DOCKER_VERNEMQ_ACCEPT_EULA
              value: "yes"
            - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
              value: "on"
            - name: DOCKER_VERNEMQ_listener__tcp__allowed_protocol_versions
              value: "3,4,5"
            - name: DOCKER_VERNEMQ_allow_register_during_netsplit
              value: "on"
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: "1"
            - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
              value: "vernemq"
            - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
              value: "9100"
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
              value: "9109"
            - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
              value: "1"
---
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-loadbalancer
---
... the rest is the same except for the namespace and the service type ...
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: vernemq-destination
  namespace: exposed-with-istio
spec:
  host: vernemq.exposed-with-istio.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
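
The DestinationRule above turns off client-side (m)TLS origination toward VerneMQ. To also rule out mTLS enforcement on the inbound side, a port-level PeerAuthentication would be the server-side counterpart — a sketch I am adding for completeness, not part of the original manifests; the selector and port mirror the resources above:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: vernemq-mqtt-plaintext
  namespace: exposed-with-istio
spec:
  selector:
    matchLabels:
      app: vernemq
  mtls:
    mode: PERMISSIVE
  portLevelMtls:
    1883:
      mode: DISABLE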
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: vernemq-gateway
  namespace: exposed-with-istio
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vernemq-virtualservice
  namespace: exposed-with-istio
spec:
  hosts:
  - "*"
  gateways:
  - vernemq-gateway
  http:
  - match:
    - uri:
        prefix: /status
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 8888
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 1883
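
With these resources in place, the external test goes against the gateway's LoadBalancer address on the TCP port from the Gateway definition (a sketch; the IP and port come from the service description above):

mosquitto_sub -h 192.168.100.240 -p 31400 -t hello -d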

Below are the outputs that were suggested:

But your envoy logs reveal a problem: envoy misc Unknown error code 104 details Connection reset by peer, and envoy pool [C5648] client disconnected.
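
The listener, endpoint and route dumps below look like istioctl proxy-config output (my assumption; the subcommands are standard istioctl, the pod name is a placeholder):

istioctl proxy-config listeners <vernemq-pod> -n exposed-with-istio
istioctl proxy-config endpoints <vernemq-pod> -n exposed-with-istio
istioctl proxy-config routes <vernemq-pod> -n exposed-with-istio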

0.0.0.0        8888  App: HTTP                                               Route: 8888
0.0.0.0        8888  ALL                                                     PassthroughCluster
10.107.205.214 1883  ALL                                                     Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
...                                            Cluster: outbound|853||istiod.istio-system.svc.cluster.local
10.107.205.214 1883  ALL                                                     Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.108.218.134 3000  App: HTTP                                               Route: grafana.istio-system.svc.cluster.local:3000
10.108.218.134 3000  ALL                                                     Cluster: outbound|3000||grafana.istio-system.svc.cluster.local
10.107.205.214 4369  App: HTTP                                               Route: vernemq.exposed-with-istio.svc.cluster.local:4369
10.107.205.214 4369  ALL                                                     Cluster: outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        8888  App: HTTP                                               Route: 8888
0.0.0.0        8888  ALL                                                     PassthroughCluster
10.107.205.214 9001  ALL                                                     Cluster: outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        9090  App: HTTP                                               Route: 9090
0.0.0.0        9090  ALL                                                     PassthroughCluster
10.96.0.10     9153  App: HTTP                                               Route: kube-dns.kube-system.svc.cluster.local:9153
10.96.0.10     9153  ALL                                                     Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0        9411  App: HTTP                                               ...
0.0.0.0        15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0/0               InboundPassthroughClusterIpv4
0.0.0.0        15006 Addr: 0.0.0.0/0                                         InboundPassthroughClusterIpv4
0.0.0.0        15006 App: TCP TLS                                            Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls; App: TCP TLS                                Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls                                              Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls; App: TCP TLS                                Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls                                              Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 App: TCP TLS                                            Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15010 App: HTTP                                               Route: 15010
0.0.0.0        15010 ALL                                                     PassthroughCluster
10.106.166.154 15012 ALL                                                     Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0        15014 App: HTTP                                               Route: 15014
0.0.0.0        15014 ALL                                                     PassthroughCluster
0.0.0.0        15021 ALL                                                     Inline Route: /healthz/ready*
10.100.213.45  15021 App: HTTP                                               Route: istio-ingressgateway.istio-system.svc.cluster.local:15021
10.100.213.45  15021 ALL                                                     Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
0.0.0.0        15090 ALL                                                     Inline Route: /stats/prometheus*
10.100.213.45  15443 ALL                                                     Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.105.193.108 15443 ALL                                                     Cluster: outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
0.0.0.0        20001 App: HTTP                                               Route: 20001
0.0.0.0        20001 ALL                                                     PassthroughCluster
10.100.213.45  31400 ALL                                                     Cluster: outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.107.205.214 44053 App: HTTP                                               Route: vernemq.exposed-with-istio.svc.cluster.local:44053
10.107.205.214 44053 ALL                                                     Cluster: outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:1883              HEALTHY     OK                outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883                   HEALTHY     OK                inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
ENDPOINT                         STATUS      OUTLIER CHECK     CLUSTER
10.101.200.113:9411              HEALTHY     OK                zipkin
10.106.166.154:15012             HEALTHY     OK                xds-grpc
10.211.55.14:6443                HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.244.243.193:53                HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.193:9153              HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.195:53                HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.195:9153              HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.197:15010             HEALTHY     OK                outbound|15010||istiod.istio-system.svc.cluster.local
10.244.243.197:15012             HEALTHY     OK                outbound|15012||istiod.istio-system.svc.cluster.local
10.244.243.197:15014             HEALTHY     OK                outbound|15014||istiod.istio-system.svc.cluster.local
10.244.243.197:15017             HEALTHY     OK                outbound|443||istiod.istio-system.svc.cluster.local
10.244.243.197:15053             HEALTHY     OK                outbound|853||istiod.istio-system.svc.cluster.local
10.244.243.198:8080              HEALTHY     OK                outbound|80||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:8443              HEALTHY     OK                outbound|443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:15443             HEALTHY     OK                outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.199:8080              HEALTHY     OK                outbound|80||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:8443              HEALTHY     OK                outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15021             HEALTHY     OK                outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15443             HEALTHY     OK                outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:31400             HEALTHY     OK                outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.201:3000              HEALTHY     OK                outbound|3000||grafana.istio-system.svc.cluster.local
10.244.243.202:9411              HEALTHY     OK                outbound|9411||zipkin.istio-system.svc.cluster.local
10.244.243.202:16686             HEALTHY     OK                outbound|80||tracing.istio-system.svc.cluster.local
10.244.243.203:9090              HEALTHY     OK                outbound|9090||kiali.istio-system.svc.cluster.local
10.244.243.203:20001             HEALTHY     OK                outbound|20001||kiali.istio-system.svc.cluster.local
10.244.243.204:9090              HEALTHY     OK                outbound|9090||prometheus.istio-system.svc.cluster.local
10.244.243.206:1883              HEALTHY     OK                outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:4369              HEALTHY     OK                outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:8888              HEALTHY     OK                outbound|8888||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:9001              HEALTHY     OK                outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:44053             HEALTHY     OK                outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883                   HEALTHY     OK                inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:4369                   HEALTHY     OK                inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:8888                   HEALTHY     OK                inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:9001                   HEALTHY     OK                inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:15000                  HEALTHY     OK                prometheus_stats
127.0.0.1:15020                  HEALTHY     OK                agent
127.0.0.1:44053                  HEALTHY     OK                inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
unix://./etc/istio/proxy/SDS     HEALTHY     OK                sds-grpc
NOTE: This output only contains routes loaded via RDS.
NAME                                                                         DOMAINS                               MATCH                  VIRTUAL SERVICE
istio-ingressgateway.istio-system.svc.cluster.local:15021                    istio-ingressgateway.istio-system     /*                     
istiod.istio-system.svc.cluster.local:853                                    istiod.istio-system                   /*                     
20001                                                                        kiali.istio-system                    /*                     
15010                                                                        istiod.istio-system                   /*                     
15014                                                                        istiod.istio-system                   /*                     
vernemq.exposed-with-istio.svc.cluster.local:4369                            vernemq                               /*                     
vernemq.exposed-with-istio.svc.cluster.local:44053                           vernemq                               /*                     
kube-dns.kube-system.svc.cluster.local:9153                                  kube-dns.kube-system                  /*                     
8888                                                                         vernemq                               /*                     
80                                                                           istio-egressgateway.istio-system      /*                     
80                                                                           istio-ingressgateway.istio-system     /*                     
80                                                                           tracing.istio-system                  /*                     
grafana.istio-system.svc.cluster.local:3000                                  grafana.istio-system                  /*                     
9411                                                                         zipkin.istio-system                   /*                     
9090                                                                         kiali.istio-system                    /*                     
9090                                                                         prometheus.istio-system               /*                     
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local     *                                     /*                     
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local     *                                     /*                     
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
                                                                             *                                     /stats/prometheus*     
InboundPassthroughClusterIpv4                                                *                                     /*                     
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
InboundPassthroughClusterIpv4                                                *                                     /*                     
                                                                             *                                     /healthz/ready*    

1 Answer

晏兴发
2023-03-14

First of all, I would suggest enabling envoy logging for the pod:

kubectl exec -it <pod-name> -c istio-proxy -- curl -X POST http://localhost:15000/logging?level=trace

Then follow the istio sidecar logs:

kubectl logs <pod-name> -c istio-proxy -f

Update:

- port: 1883
  name: tcp-mqtt
  targetPort: 1883
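
Background on why the port name matters: without an explicit appProtocol, Istio infers the protocol of a service port from its name prefix, so tcp-mqtt forces plain TCP proxying instead of HTTP. On recent Kubernetes/Istio versions the same intent can be stated explicitly (a sketch, not part of the original answer):

- port: 1883
  name: tcp-mqtt
  targetPort: 1883
  appProtocol: tcp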