
CNCF kube-router Study Notes

庾和昶
2023-12-01

kube-router

Installation

  1. Installation differs depending on the mode; see

https://github.com/cloudnativelabs/kube-router/blob/master/docs/user-guide.md

  1. Feature modes
     a) pod networking and network policy
     b) providing service proxy, firewall and pod networking
     c) note: if the service proxy is enabled, kube-proxy must be cleaned up first
     d) corresponding flags: --run-firewall, --run-router, --run-service-proxy
  2. Run modes
     a) running as a daemonset - the yaml differs by Kubernetes deployment method (kubeadm, generic)
     b) running as an agent

kube-router --master=http://192.168.1.99:8080/ --run-firewall=true --run-service-proxy=false --run-router=false

It can replace kube-proxy:

kubectl -n kube-system delete ds kube-proxy

docker run --privileged -v /lib/modules:/lib/modules --net=host k8s.gcr.io/kube-proxy-amd64:v1.15.1 kube-proxy --cleanup

kube-router --master=http://192.168.1.99:8080/ --run-service-proxy=true --run-firewall=false --run-router=false
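For the daemonset run mode (option a above), the same feature flags map onto the container args. A minimal sketch, assuming a container named kube-router and the upstream image (the real manifests in the kube-router repo differ per deployment method):

```yaml
# Hypothetical DaemonSet pod-spec fragment enabling all three feature modes;
# container name and image are assumptions, not copied from upstream yaml.
containers:
- name: kube-router
  image: cloudnativelabs/kube-router
  args:
  - --run-router=true
  - --run-firewall=true
  - --run-service-proxy=true
```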

Requirements

  1. kube-router needs access to the Kubernetes API server to get information on pods, services, endpoints, and network policies
  2. the ipset package must be installed on each node if kube-router is used as an agent
  3. the Kubernetes controller manager must be configured to allocate pod CIDRs by passing the --allocate-node-cidrs=true flag and providing a cluster CIDR

kubeadm enables this by default:

 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --node-cidr-mask-size=24 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true
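With kubeadm, these controller-manager flags are driven by the cluster networking settings. A minimal sketch of the relevant ClusterConfiguration fields, reusing the CIDR values from the command above:

```yaml
# kubeadm ClusterConfiguration sketch: setting networking.podSubnet causes
# kubeadm to pass --allocate-node-cidrs=true and --cluster-cidr to the
# controller manager. CIDRs are the same illustrative values as above.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
```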

For Kubernetes versions below v1.15, both kube-apiserver and kubelet must be run with the --allow-privileged=true option.

On each node, a CNI conf file is expected to be present at /etc/cni/net.d/10-kuberouter.conf. The bridge CNI plugin should be used, with host-local for IPAM.

At runtime, the file is rewritten as a conflist:

/etc/cni/net.d/10-kuberouter.conflist
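A minimal sketch of what such a conflist might look like, assuming the bridge/host-local setup described above (the cniVersion and plugin entry names are illustrative assumptions):

```json
{
  "cniVersion": "0.3.0",
  "name": "mynet",
  "plugins": [
    {
      "name": "kubernetes",
      "type": "bridge",
      "bridge": "kube-bridge",
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local"
      }
    }
  ]
}
```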

Advertising IPs

  1. locally adding the advertised IPs to the node's kube-dummy-if network interface
  2. advertising the IPs to its BGP peers
  3. to set the default for all services, use the --advertise-cluster-ip, --advertise-external-ip and --advertise-loadbalancer-ip flags
  4. to selectively enable or disable this feature per service, use the kube-router.io/service.advertise.clusterip, kube-router.io/service.advertise.externalip and kube-router.io/service.advertise.loadbalancerip annotations
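For example, a hypothetical Service that opts out of ClusterIP advertisement while the cluster-wide flag is on (only the annotation key comes from the docs above; the Service itself is illustrative):

```yaml
# Hypothetical Service; setting the annotation to "false" overrides a
# cluster-wide --advertise-cluster-ip=true for this one Service.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    kube-router.io/service.advertise.clusterip: "false"
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```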

Hairpin Mode

  1. All outgoing packets go through the switch, which then forwards them on to the destination (even when the destination is another macvlan interface on the same host). This is hairpin mode; it is used when the switch needs to do filtering, accounting, and similar functions.
  2. It requires a switch that supports 802.1Qbg.
  3. brctl hairpin br0 eth1 on
  4. Once hairpin mode is configured on a Linux host, traffic whose source and destination addresses are both local macvlan interface addresses is sent back by br0 (assuming the bridge you created is br0) to the corresponding interface.
  5. Because it generates a lot of unnecessary traffic, it generally should not be enabled: http://chenchun.github.io/network/2017/10/09/hairpin
  6. Enable it per service with the kube-router.io/service.hairpin= annotation, or for all Services in a cluster by passing the flag --hairpin-mode=true to kube-router; also set hairpinMode in 10-kuberouter.conf:

{
  "name": "mynet",
  "type": "bridge",
  "bridge": "kube-bridge",
  "isDefaultGateway": true,
  "hairpinMode": true,
  "ipam": {
    "type": "host-local"
  }
}

Direct Server Return (DSR) - LVS direct return

  1. kubectl annotate service my-service "kube-router.io/service.dsr=tunnel"
  2. kube-router must be started with access to docker.sock

Volume mounts (including initContainers):

  - name: run
    mountPath: /var/run/docker.sock
    readOnly: true

Host bindings:

hostNetwork: true
hostIPC: true
hostPID: true
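Putting both pieces together, the relevant parts of the kube-router daemonset pod spec for DSR look roughly like this (a sketch; the container and volume names are assumptions):

```yaml
# Sketch only: host namespaces plus the docker.sock mount needed for DSR.
spec:
  hostNetwork: true
  hostIPC: true
  hostPID: true
  containers:
  - name: kube-router
    volumeMounts:
    - name: run
      mountPath: /var/run/docker.sock
      readOnly: true
  volumes:
  - name: run
    hostPath:
      path: /var/run/docker.sock
```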

  1. Scheduling algorithms
     a) the various LVS scheduling algorithms are supported
     b) for round-robin scheduling use:
        kubectl annotate service my-service "kube-router.io/service.scheduler=rr"
  2. Only external access is supported, not internal: DSR will be applicable only to the external IPs.
  3. Port remapping is not supported, so you need to use the same port and target port for the service.
  4. https://archive.nanog.org/meetings/nanog51/presentations/Monday/NANOG51.Talk45.nanog51-Schaumann.pdf
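The constraints above can be illustrated with a hypothetical Service manifest: DSR via the tunnel annotation, port equal to targetPort (since remapping is unsupported), and an external IP (since DSR only applies to external IPs). The annotation keys come from the docs above; everything else is illustrative:

```yaml
# Hypothetical DSR Service. Note port == targetPort and the external IP;
# 192.0.2.10 is a documentation-range placeholder address.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    kube-router.io/service.dsr: "tunnel"
    kube-router.io/service.scheduler: "rr"
spec:
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080
  externalIPs:
  - 192.0.2.10
```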

HostPort support

  1. Host port support
  2. Add to the conf file:
  3. a){
       "type": "portmap",
       "capabilities": {
         "snat": true,
         "portMappings": true
       }
     }
  4. https://blog.51cto.com/juestnow/2417570
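When chained, the portmap entry sits alongside the bridge plugin inside the 10-kuberouter.conflist plugins array, roughly like this (overall structure assumed, values illustrative):

```json
{
  "cniVersion": "0.3.0",
  "name": "mynet",
  "plugins": [
    {
      "name": "kubernetes",
      "type": "bridge",
      "bridge": "kube-bridge",
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local"
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "snat": true,
        "portMappings": true
      }
    }
  ]
}
```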