Question:

istio ingressgateway: Readiness probe failed: HTTP probe failed with statuscode: 503

丘普松
2023-03-14

I am trying to set up Istio 1.5.1 on a Minikube Kubernetes cluster, following Knative's official documentation for installing Istio without sidecar injection. I am facing an issue with the istio-ingressgateway service: its EXTERNAL-IP stays shown as <pending>. I have gone through other answers posted here, as well as many other forums, but none of them helped me.

Using Minikube v1.9.1 with driver=none, Helm v2.16.5, and kubectl v1.18.0.

I get the following output from kubectl get pods --namespace istio-system:

 NAME                                   READY   STATUS    RESTARTS   AGE
 istio-ingressgateway-b599cccd9-qnp5l   1/1     Running   0          60s
 istio-pilot-b67ccb85-mfllc             1/1     Running   0          60s

kubectl get svc --namespace istio-system

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
istio-ingressgateway   LoadBalancer   10.104.37.189    <pending>     15020:30168/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32576/TCP,15030:31080/TCP,15031:31767/TCP,15032:31812/TCP,15443:30660/TCP   74s
istio-pilot            ClusterIP      10.100.224.212   <none>        15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       74s

When describing the ingress gateway pod, I get a warning: Readiness probe failed: HTTP probe failed with statuscode: 503
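For reference, these are the commands I use to dig into the failing probe. The pod name is taken from the output above; the port 15020 and path /healthz/ready are assumptions based on the readiness probe defined in the istio-ingressgateway Deployment and may differ between Istio versions:

  # Show pod events (including the probe warnings) and the gateway logs
  kubectl describe pod istio-ingressgateway-b599cccd9-qnp5l -n istio-system
  kubectl logs istio-ingressgateway-b599cccd9-qnp5l -n istio-system

  # Query the readiness endpoint directly (assumes curl exists in the proxy image)
  kubectl exec istio-ingressgateway-b599cccd9-qnp5l -n istio-system -- \
    curl -s -o /dev/null -w '%{http_code}\n' http://localhost:15020/healthz/ready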

Can someone help me resolve this issue? Thanks.

Update with the output from trying the suggested answer:

kubectl apply -f metallb.yaml

  podsecuritypolicy.policy/controller created
  podsecuritypolicy.policy/speaker created
  serviceaccount/controller created
  serviceaccount/speaker created
  clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
  clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
  role.rbac.authorization.k8s.io/config-watcher created
  role.rbac.authorization.k8s.io/pod-lister created
  clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
  clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
  rolebinding.rbac.authorization.k8s.io/config-watcher created
  rolebinding.rbac.authorization.k8s.io/pod-lister created
  daemonset.apps/speaker created
  deployment.apps/controller created

$ kubectl get pods -n metallb-system

  No resources found in metallb-system namespace.

After applying the YAML file, it shows that everything was created, but no pods are deployed in the metallb-system namespace.
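For what it's worth, these checks should show whether the objects from metallb.yaml actually exist and why no pods were scheduled (a quick sketch; the names speaker and controller are the ones from the YAML below):

  # Was the metallb-system namespace created at all?
  kubectl get namespace metallb-system

  # Do the DaemonSet and Deployment exist, and what do their events say?
  kubectl get daemonset,deployment -n metallb-system
  kubectl describe daemonset speaker -n metallb-system
  kubectl describe deployment controller -n metallb-system

  # Recent events in the namespace often explain missing pods
  kubectl get events -n metallb-system --sort-by=.lastTimestamp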

1 Answer

易风华
2023-03-14

Minikube may not provide an external IP or load balancer out of the box, so you may have to use MetalLB with Minikube.
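As a side note, even while the EXTERNAL-IP is <pending>, the gateway is still reachable through its NodePorts (80:31380 in your output). A quick check, assuming the Minikube node IP is reachable from your machine; expect a 404 from Envoy until a Gateway/VirtualService routes the request:

  curl -I http://$(minikube ip):31380/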

MetalLB: https://metallb.universe.tf/

You can also check this for reference: https://medium.com/@emirmujic/istio-and-metallb-on-Minikube-242281b1134b

This is also a good reference: https://gist.github.com/diegopacheco/9ed4fd9b9a0f341e94e0eb791169ecf9

MetalLB YAML:

apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    app: metallb
---

apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: metallb-system
  name: controller
  labels:
    app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: metallb-system
  name: speaker
  labels:
    app: metallb

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-system:controller
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["services/status"]
  verbs: ["update"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-system:speaker
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: metallb-system
  name: config-watcher
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---

## Role bindings
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-system:controller
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: controller
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-system:speaker
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: speaker
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: metallb-system
  name: config-watcher
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: controller
- kind: ServiceAccount
  name: speaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: config-watcher
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: metallb-system
  name: speaker
  labels:
    app: metallb
    component: speaker
spec:
  selector:
    matchLabels:
      app: metallb
      component: speaker
  template:
    metadata:
      labels:
        app: metallb
        component: speaker
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "7472"
    spec:
      serviceAccountName: speaker
      terminationGracePeriodSeconds: 0
      hostNetwork: true
      containers:
      - name: speaker
        image: metallb/speaker:v0.7.1
        imagePullPolicy: IfNotPresent
        args:
        - --port=7472
        - --config=config
        env:
        - name: METALLB_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - name: monitoring
          containerPort: 7472
        resources:
          limits:
            cpu: 100m
            memory: 100Mi

        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - all
            add:
            - net_raw

---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: metallb-system
  name: controller
  labels:
    app: metallb
    component: controller
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: metallb
      component: controller
  template:
    metadata:
      labels:
        app: metallb
        component: controller
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "7472"
    spec:
      serviceAccountName: controller
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534 # nobody
      containers:
      - name: controller
        image: metallb/controller:v0.7.1
        imagePullPolicy: IfNotPresent
        args:
        - --port=7472
        - --config=config
        ports:
        - name: monitoring
          containerPort: 7472
        resources:
          limits:
            cpu: 100m
            memory: 100Mi

        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
          readOnlyRootFilesystem: true 