
kube-ingress-aws-controller

License: MIT License
Language: Go
Category: Cloud Computing / Cloud Native
Software Type: Open Source
Region: Unknown
Submitted by: 融修平
Operating System: Cross-platform
Audience: Unknown

Overview

Kubernetes Ingress Controller for AWS

This is an ingress controller for Kubernetes — the open-source container deployment, scaling, and management system — on AWS. It runs inside a Kubernetes cluster to monitor changes to your ingress resources and orchestrate AWS Load Balancers accordingly.


This ingress controller uses the EC2 instance metadata of the worker node where it's currently running to find additional details about the cluster provisioned by Kubernetes on top of AWS. This information is used to manage AWS resources for each ingress object of the cluster.

Features

  • Uses CloudFormation to guarantee consistent state
  • Automatic discovery of SSL certificates
  • Automatic forwarding of requests to all Worker Nodes, even with auto scaling
  • Automatic cleanup of unnecessary managed resources
  • Support for both Application Load Balancers and Network Load Balancers.
  • Support for internet-facing and internal load balancers
  • Support for ignoring cluster-internal ingresses that only have --cluster-local-domain=cluster.local domains
  • Support for denying traffic for internal domains.
  • Support for multiple Auto Scaling Groups
  • Support for instances that are not part of Auto Scaling Group
  • Support for SSLPolicy, set default and per ingress
  • Support for CloudWatch Alarm configuration
  • Can be used in clusters created by Kops, see our deployment guide for Kops
  • Support for multiple TLS certificates per ALB (SNI)
  • Support for AWS WAF and WAFv2

Upgrade

<v0.12.0 to >=v0.12.0

Version v0.12.0 changes Network Load Balancer type handling if Application Load Balancer type feature is requested. See Load Balancers types notes for details.

<v0.11.0 to >=0.11.0

Version v0.11.0 changes the default apiVersion used for fetching/updating ingresses from extensions/v1beta1 to networking.k8s.io/v1beta1. For this to work the controller needs to have permissions to list ingresses and update, patch ingresses/status from the networking.k8s.io apiGroup. See the deployment example. To fall back to the old behavior you can set the apiVersion via the --ingress-api-version flag. The value must be extensions/v1beta1 or networking.k8s.io/v1beta1 (default).
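As a sketch, the fallback described in this note amounts to passing the flag explicitly on the command line:

```shell
# Keep fetching/updating ingresses via the pre-v0.11.0 API group.
kube-ingress-aws-controller --ingress-api-version=extensions/v1beta1
```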

<v0.9.0 to >=v0.9.0

Version v0.9.0 changes the internal flag parsing library to kingpin. This means flags are now defined with -- (two dashes) instead of a single dash. You need to change all the flags accordingly, e.g. -stack-termination-protection -> --stack-termination-protection, before running v0.9.0 of the controller.

<v0.8.0 to >=v0.8.0

Version v0.8.0 added a certificate verification check to automatically ignore self-signed certificates and certificates from internal CAs. The IAM role used by the controller now needs the acm:GetCertificate permission. The acm:DescribeCertificate permission is no longer needed and can be removed from the role.

<v0.7.0 to >=v0.7.0

Version v0.7.0 deletes the annotation zalando.org/aws-load-balancer-ssl-cert-domain, which we no longer consider a feature since we have SNI-enabled ALBs.

<v0.6.0 to >=v0.6.0

Version v0.6.0 introduced support for multiple TLS certificates per ALB (SNI). When upgrading, your ALBs will automatically be aggregated into a single ALB with multiple certificates configured. It also adds support for attaching single EC2 instances and multiple AutoScalingGroups to the ALBs, therefore you must ensure you have the correct instance filter defined before upgrading. The default filter is tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node; see How it works for more information on how to configure this.

<v0.5.0 to >=v0.5.0

Version v0.5.0 introduced support for both internet-facing and internal load balancers. For this change we had to change the naming of the CloudFormation stacks created by the controller. To upgrade from v0.4.* to v0.5.0 no changes are needed, but because of the naming change of the stacks, migrating back down to a v0.4.* version will be disruptive, as the older version is unable to manage the stacks with the new naming scheme. Deleting the stacks manually will allow for a working downgrade.

<v0.4.0 to >=v0.4.0

In versions before v0.4.0 we used AWS tags that were set automatically by CloudFormation to find some AWS resources. This behavior has been changed to use custom, non-CloudFormation tags.

In order to update to v0.4.0, you have to add the following tags to your AWS load balancer SecurityGroup before updating:

  • kubernetes:application=kube-ingress-aws-controller
  • kubernetes.io/cluster/<cluster-id>=owned

Additionally you must ensure that the instance where the ingress controller is running has the cluster ID tag kubernetes.io/cluster/<cluster-id>=owned set (was ClusterID=<cluster-id> before v0.4.0).

Ingress annotations

Overview of configuration which can be set via Ingress annotations.

Annotations

Name Value Default
alb.ingress.kubernetes.io/ip-address-type ipv4 | dualstack ipv4
zalando.org/aws-load-balancer-ssl-cert string N/A
zalando.org/aws-load-balancer-scheme internal | internet-facing internet-facing
zalando.org/aws-load-balancer-shared true | false true
zalando.org/aws-load-balancer-security-group string N/A
zalando.org/aws-load-balancer-ssl-policy string ELBSecurityPolicy-2016-08
zalando.org/aws-load-balancer-type nlb | alb alb
zalando.org/aws-load-balancer-http2 true | false true
zalando.org/aws-waf-web-acl-id string N/A
kubernetes.io/ingress.class string N/A

The defaults can also be configured globally via a flag on the controller.
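As a sketch, some of the defaults above map to controller flags that appear elsewhere in this document (--ssl-policy, --load-balancer-type, --ip-addr-type); check the controller's --help output for the authoritative flag list:

```shell
# Change cluster-wide defaults once instead of annotating every ingress.
kube-ingress-aws-controller \
  --ssl-policy=ELBSecurityPolicy-TLS-1-2-2017-01 \
  --load-balancer-type="network" \
  --ip-addr-type=dualstack
```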

Load Balancers types

The controller supports both Application Load Balancers and Network Load Balancers. Below is an overview of which features can be used with the individual Load Balancer types.

Feature Application Load Balancer Network Load Balancer
HTTPS ✔️ ✔️
HTTP ✔️ ✔️ --nlb-http-enabled
HTTP -> HTTPS redirect ✔️ --redirect-http-to-https ✖️
Cross Zone Load Balancing ✔️ (only option) ✔️ --nlb-cross-zone
Dualstack support ✔️ --ip-addr-type=dualstack ✖️
Idle Timeout ✔️ --idle-connection-timeout ✖️
Custom Security Group ✔️ ✖️
Web Application Firewall (WAF) ✔️ ✖️
HTTP/2 Support ✔️ (not relevant)

To facilitate switching the default load balancer type from Application to Network: when the default load balancer type is Network (--load-balancer-type="network") and the Custom Security Group (zalando.org/aws-load-balancer-security-group) or Web Application Firewall (zalando.org/aws-waf-web-acl-id) annotation is present, the controller configures an Application Load Balancer instead. If the zalando.org/aws-load-balancer-type: nlb annotation is also present, then the controller ignores the configuration and logs an error.

AWS Tags

SecurityGroup auto detection needs the following AWS tags on the SecurityGroup:

  • kubernetes.io/cluster/<cluster-id>=owned
  • kubernetes:application=<controller-id>; controller-id defaults to kube-ingress-aws-controller and can be set by the flag --controller-id=<my-ctrl-id>.

AutoScalingGroup auto detection needs the same AWS tags on the AutoScalingGroup as defined for the SecurityGroup.

In case you want to attach/detach single EC2 instances to the ALB Target Group, they have to have the same <cluster-id> set as the running kube-ingress-aws-controller. Normally this would be kubernetes.io/cluster/<cluster-id>=owned.
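As an illustration, the tags above could be added with the AWS CLI; the security group ID is a placeholder, and my-cluster stands in for your actual <cluster-id>:

```shell
# Tag the SecurityGroup so the controller's auto detection can find it.
aws ec2 create-tags \
  --resources sg-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=owned \
         Key=kubernetes:application,Value=kube-ingress-aws-controller
```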

Development Status

This controller has been used in production since Q1 2017. It aims to be useful out of the box for anyone running Kubernetes. Jump down to the Quickstart to try it out, and please let us know if you have trouble getting it running by filing an Issue. If you created your cluster with Kops, see our deployment guide for Kops.

As of this writing, it's being used in production use cases at Zalando, and can be considered battle-tested in this setup. We're actively seeking devs/teams/companies to try it out and share feedback so we can make improvements.

We are also eager to bring new contributors on board. See our contributor guidelines to get started, or claim a "Help Wanted" item.

Why We Created This Ingress Controller

The maintainers of this project are building an infrastructure that runs Kubernetes on top of AWS at large scale (for nearly 200 delivery teams), and with automation. As such, we're creating our own tooling to support this new infrastructure. We couldn't find an existing ingress controller that operates like this one does, so we created one ourselves.

We're using this ingress controller with Skipper, an HTTP router that Zalando has used in production since Q4 2015 as part of its front-end microservices architecture. Skipper is also open source and has some outstanding features that we documented here. Feel free to use it, or use another ingress of your choosing.

How It Works

This controller continuously polls the API server to check for ingress resources. It runs an infinite loop. For each cycle it creates load balancers for new ingress resources, and deletes the load balancers for obsolete/removed ingress resources.

This is achieved using AWS CloudFormation. For more details check our CloudFormation Documentation.

The controller will not manage the security groups required to allow access from the Internet to the load balancers. It assumes that their lifecycle is external to the controller itself.

During the startup phase, EC2 filters are constructed as follows:

  • If the CUSTOM_FILTERS environment variable is set, it is used to generate the filters that are later used to fetch instances from EC2.
  • If the CUSTOM_FILTERS environment variable is not set or could not be parsed, then the default filters are tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node, where <cluster-id> is determined from the EC2 tags of the instance on which the Ingress Controller pod is started.

CUSTOM_FILTERS is a list of filters separated by spaces. Each filter has the form name=value, where name can be a tag:-prefixed expression or tag-key, as recognized by the EC2 API, and value is the value of the filter, or a comma-separated list of values.

For example:

  • tag-key=test will filter instances that have a tag named test, ignoring the value.
  • tag:foo=bar will filter instances that have a tag named foo with the value bar.
  • tag:abc=def,ghi will filter instances that have a tag named abc with the value def OR ghi.
  • The default filter tag:kubernetes.io/cluster/<cluster-id>=owned tag-key=k8s.io/role/node filters instances that have the tag kubernetes.io/cluster/<cluster-id> with value owned and also have a tag named k8s.io/role/node (any value).
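Putting this together, a custom filter can be passed via the environment when starting the controller; the environment=production tag is a hypothetical example:

```shell
# Restrict instance discovery to nodes tagged environment=production
# that also carry the k8s.io/role/node tag key.
CUSTOM_FILTERS='tag:environment=production tag-key=k8s.io/role/node' \
  kube-ingress-aws-controller
```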

Every poll cycle, EC2 is queried with the filters that were constructed during startup. Each newly discovered instance is scanned for an Auto Scaling Group tag, and each Target Group created by this ingress controller is added to each known Auto Scaling Group. Information about an Auto Scaling Group is fetched only once, when its first node is discovered for the first time. If an instance does not belong to an Auto Scaling Group (i.e. it has no aws:autoscaling:groupName tag), it is stored in a separate list of Single Instances. On each cycle, the instances on this list are registered as targets in all Target Groups managed by this controller. If the call to get instances from EC2 does not return a previously known Single Instance, it is deregistered from the Target Groups and removed from the list of Single Instances. Calls to deregister instances are aggregated so that at most one deregister call is issued per poll cycle.

For Auto Scaling Groups, the controller will always try to build a list of owned Auto Scaling Groups based on the tag kubernetes.io/cluster/<cluster-id>=owned, even if this tag is not specified in the CUSTOM_FILTERS configuration. Tracking the owned Auto Scaling Groups is done to automatically deregister any ASGs which are no longer targeted by the CUSTOM_FILTERS.

Discovery

On startup, the controller discovers the AWS resources required for the controller operations:

  1. The Security Group

    Lookup of the kubernetes.io/cluster/<cluster-id> tag of the Security Group matching the cluster ID for the controller node and kubernetes:application matching the value kube-ingress-aws-controller, or, as a fallback for <v0.4.0, the tag aws:cloudformation:logical-id matching the value IngressLoadBalancerSecurityGroup (only clusters created by CloudFormation).

  2. The Subnets

    Subnets are discovered based on the VPC of the instance where the controller is running. By default it will try to select all subnets of the VPC but will limit the subnets to one per Availability Zone. If there are many subnets within the VPC, it's possible to tag the desired subnets with the tags kubernetes.io/role/elb (for internet-facing ALBs) or kubernetes.io/role/internal-elb (for internal ALBs). Subnets with these tags will be favored when selecting subnets for the ALBs. Additionally you can tag EC2 subnets with kubernetes.io/cluster/<cluster-id>, which will be prioritized. If there are two possible subnets for a single Availability Zone, then the first subnet, lexicographically sorted by ID, will be selected.

Creating Load Balancers

When the controller learns about new ingress resources, it uses the hosts specified in them to automatically determine the most specific, valid certificates to use. The certificates have to be valid for at least 7 days. An example ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-app
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          serviceName: test-app-service
          servicePort: main-port

The Application Load Balancer created by the controller will have both an HTTP listener and an HTTPS listener. The latter will use the automatically selected certificates.

By default the ingress controller will aggregate all ingresses under as few Application Load Balancers as possible (unless running with --disable-sni-support). If you would like to provision an Application Load Balancer that is unique for an ingress, you can use the annotation zalando.org/aws-load-balancer-shared: "false".

The new Application Load Balancers have a custom tag marking them as managed load balancers to differentiate them from other load balancers. The tag looks like this:

`kubernetes:application` = `kube-ingress-aws-controller`

They also share the kubernetes.io/cluster/<cluster-id> tag with other resources from the cluster they belong to.

Create a Load Balancer with a pinned certificate

As a second option you can specify the Amazon Resource Name (ARN) of the desired certificate with an annotation like the one shown here:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    zalando.org/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:123456789012:certificate/f4bd7ed6-bf23-11e6-8db1-ef7ba1500c61
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          serviceName: test-app-service
          servicePort: main-port

Create an internal Load Balancer

You can select the Application Load Balancer scheme with an annotation like the one shown here:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    zalando.org/aws-load-balancer-scheme: internal
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          serviceName: test-app-service
          servicePort: main-port

You can only select from internet-facing (default) and internal options.

Omit to create a Load Balancer for cluster internal domains

Since >=v0.10.5, you can create Ingress objects with host rules ending in .cluster.local, and the controller will not create an ALB for them.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  rules:
  - host: test-app.skipper.cluster.local
    http:
      paths:
      - backend:
          serviceName: test-app-service
          servicePort: main-port

If you pass --cluster-local-domain=".cluster.local", you can change what domain is considered cluster internal. If you're using the deny internal traffic feature, you might want to sync this configuration with the --internal-domains one.

Deny traffic for internal domains

Since >=v0.11.18 the controller supports the flag --deny-internal-domains. It's a boolean config item that, when enabled, configures the ALBs' CloudFormation templates with an AWS::ElasticLoadBalancingV2::ListenerRule resource. This rule will be configured with the condition values from the --internal-domains flag and a fixed-response action built from the respective --deny-internal-domains-response* flags. This feature is not enabled by default. The following are the default values of its config flags:

  • internal-domains: *.cluster.local
  • deny-internal-domains: false (same as explicitly passing --no-deny-internal-domains)
  • deny-internal-domains-response: Unauthorized
  • deny-internal-domains-response-content-type: text/plain
  • deny-internal-domains-response-status-code: 401

Note that --internal-domains differs from --cluster-local-domain, which is used exclusively to avoid load balancer creation for the cluster-internal domain. The --internal-domains flag can be set multiple times and accepts AWS' wildcard characters. Check the AWS docs on the Host Header config for more details.

This feature is not supported by NLBs.

Example:

Running the controller with --deny-internal-domains and--internal-domains=*.cluster.local will generate a rule in the ALBthat matches any request to domains ending in .cluster.local and answerthe request with an HTTP 401 Unauthorized.
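The example above corresponds to a command line like the following (flag values are the defaults listed in this section, shown explicitly):

```shell
# Answer requests for cluster-internal domains with a fixed 401 response.
kube-ingress-aws-controller \
  --deny-internal-domains \
  --internal-domains='*.cluster.local' \
  --deny-internal-domains-response=Unauthorized \
  --deny-internal-domains-response-content-type=text/plain \
  --deny-internal-domains-response-status-code=401
```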

Create Load Balancer with SSL Policy

You can select the default SSLPolicy with the flag --ssl-policy=ELBSecurityPolicy-TLS-1-2-2017-01. This choice can be overridden per ingress by the Kubernetes Ingress annotation zalando.org/aws-load-balancer-ssl-policy with any valid value. Valid values will be checked by the controller.

Example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    zalando.org/aws-load-balancer-ssl-policy: ELBSecurityPolicy-FS-2018-06
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          serviceName: test-app-service
          servicePort: main-port

Create Load Balancer with SecurityGroup

The controller will normally detect the SecurityGroup to use automatically. Auto detection is done by filtering all SecurityGroups by AWS tags: the kubernetes.io/cluster/<cluster-id> tag of the SecurityGroup should match the cluster ID for the controller node with the value owned, and the kubernetes:application tag should match the value kube-ingress-aws-controller.

If you want to override the detected SecurityGroup, you can set a SecurityGroup of your choice with the zalando.org/aws-load-balancer-security-group annotation like the one shown here:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    zalando.org/aws-load-balancer-security-group: sg-somegroupid
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          serviceName: test-app-service
          servicePort: main-port

Create Load Balancers with WAF associations

It is possible to define WAF associations for the created load balancers. The WAF Web ACLs need to be created separately via CloudFormation or the AWS Console, and they can be referenced either as a global startup configuration of the controller, or as ingress-specific settings in the ingress object with an annotation. The ingress annotation overrides the global setting, and the controller will create separate load balancers for those ingresses using a separate WAF association.

The controller supports two versions of AWS WAF:

  • WAF (v1 or "classic"): the Web ACL is identified by a UUID
  • WAFv2: the Web ACL is identified by its ARN, prefixed with arn:aws:wafv2:

Only one WAF association can be used for a load balancer, and the same command line flag and ingress annotation is used for both versions; only the format of the value differs.

Starting the controller with a global WAF association:
kube-ingress-aws-controller --aws-waf-web-acl-id=arn:aws:wafv2:eu-central-1:123456789012:regional/webacl/test-waf-acl/12345678-abcd-efgh-ijkl-901234567890
Setting an ingress-specific WAF association:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  annotations:
    zalando.org/aws-waf-web-acl-id: arn:aws:wafv2:eu-central-1:123456789012:regional/webacl/test-waf-acl/12345678-abcd-efgh-ijkl-901234567890
spec:
  rules:
  - host: test-app.example.org
    http:
      paths:
      - backend:
          serviceName: test-app-service
          servicePort: main-port

Deleting load balancers

When the controller detects that a managed load balancer for the current cluster doesn't have a matching ingress resource anymore, it deletes all the previously created resources.

Deletion may take up to about 30 minutes. This ensures proper draining of connections on the load balancers and allows for DNS TTLs to expire.

Building

This project provides a Makefile that you can use to build either a binary or a Docker image.

Building a Binary

To build a binary for the Linux operating system, simply run make or make build.linux.

Building a Docker Image

To create a Docker image instead, execute make build.docker. You can then push your Docker image to the Docker registry of your choice.

Deploy

To deploy the ingress controller, use the example YAML as the descriptor. You can customize the image used in the example YAML file.

We provide registry.opensource.zalan.do/teapot/kube-ingress-aws-controller:latest as a publicly usable Docker image built from this codebase. You can deploy it in two easy steps:

  • Replace the placeholder for your region inside the example YAML (e.g., eu-west-1)
  • Use kubectl to execute the command kubectl apply -f deploy/ingress-controller.yaml

If you use Kops to create your cluster, please use our deployment guide for Kops.

Running multiple instances

In some cases it might be useful to run multiple instances of this controller:

  • Isolating internal vs external traffic
  • Using a different set of traffic processing nodes
  • Using different frontend routers (e.g.: Skipper and Traefik)

You can use the flag --controller-id to set a token that will be used to isolate resources between controller instances. This value will be used to tag those resources.

If you don't pass an ID, the default kube-ingress-aws-controller will be used.

Usually you would want to combine this flag with --ingress-class-filter so different types of ingresses are associated with the different controllers. To make kube-ingress-aws-controller manage both a specific ingress class and an empty one (or ingresses without an ingress class annotation), add an empty class to the list. For example, to manage the ingress class foo and ingresses without a class, set the parameter like this: --ingress-class-filter=foo, (notice the trailing comma).
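For example, two controller instances could split internal and external ingresses like this (the internal/external IDs and class names are hypothetical; the trailing comma makes the first instance also pick up ingresses without a class annotation):

```shell
# Instance 1: manages class "internal" plus classless ingresses.
kube-ingress-aws-controller --controller-id=internal --ingress-class-filter=internal,

# Instance 2: manages class "external" only.
kube-ingress-aws-controller --controller-id=external --ingress-class-filter=external
```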

Target and Health Check Ports

By default the port 9999 is used as both the health check and target port. This means that Skipper or any other traffic router you're using needs to be listening on that port.

If you want to change the default ports, you can control them using the --target-port and --health-check-port flags.

If you want to use an HTTPS-enabled target port, use the --target-https flag. This will only affect ALBs; NLBs ignore this flag.
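A sketch combining the port flags from this section (the 8080 target port is an arbitrary example; flags are shown in the post-v0.9.0 double-dash form):

```shell
# Route traffic to an HTTPS-enabled port 8080 on the nodes while keeping
# the health check on the default port 9999.
kube-ingress-aws-controller \
  --target-port=8080 \
  --health-check-port=9999 \
  --target-https
```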

HTTP to HTTPS Redirection

By default, the controller will expose both HTTP and HTTPS ports on the load balancer, and forward both listeners to the target port. Setting the flag --redirect-http-to-https will instead configure the HTTP listener to emit a 301 redirect for any request received, with the destination location being the same URL but with the HTTPS scheme instead of HTTP. The specifics are described in the relevant AWS documentation.

Backward Compatibility

The controller used to have only the --health-check-port flag available, and would use the same port for the health check and as the target port. Those ports are now configured individually. If you relied on this behavior, please include the --target-port in your configuration.

Trying it out

The Ingress Controller's responsibility is limited to managing load balancers, as described above. To have a fully functional setup, in addition to the ingress controller, you can use Skipper to route the traffic to the application. The setup follows what's described here.

You can deploy skipper as a DaemonSet using another example YAML by executing the following command:

kubectl apply -f deploy/skipper.yaml

To complete the setup, you'll need to fulfill some additional requirements regarding security groups and IAM roles; more info here.

DNS

To have convenient DNS names for your application, you can use the Kubernetes Incubator project external-dns. It's not strictly necessary for this Ingress Controller to work, though.

Contributing

We welcome your contributions, ideas and bug reports via issues and pull requests; here are those Contributor guidelines again.

Contact

Check our MAINTAINERS file for email addresses.

Security

We welcome your security reports; please check out our SECURITY.md.

License

The MIT License (MIT) Copyright © [2017] Zalando SE, https://tech.zalando.com

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
