CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted. Because of this focus, CNI has a wide range of support and the specification is simple to implement.
As well as the specification, this repository contains the Go source code of a library for integrating CNI into applications and an example command-line tool for executing CNI plugins. A separate repository contains reference plugins and a template for making new plugins.
The template code makes it straightforward to create a CNI plugin for an existing container networking project. CNI also makes a good framework for creating a new container networking project from scratch.
Here are the recordings of two sessions that the CNI maintainers hosted at KubeCon/CloudNativeCon 2019:
Application containers on Linux are a rapidly evolving area, and within this area networking is not well addressed as it is highly environment-specific. We believe that many container runtimes and orchestrators will seek to solve the same problem of making the network layer pluggable.
To avoid duplication, we think it is prudent to define a common interface between the network plugins and container execution: hence we put forward this specification, along with libraries for Go and a set of plugins.
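As a rough illustration of that interface (this is a sketch, not code taken from this repository), a runtime executes a plugin binary, passing parameters through `CNI_*` environment variables and the network configuration JSON on the plugin's stdin; the plugin reports its result as JSON on stdout. The plugin path, container ID, netns path and the truncated configuration below are assumptions for illustration; a real ADD needs a complete configuration and an existing network namespace.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Minimal (incomplete) network configuration, passed to the plugin on stdin.
	netconf := `{"cniVersion": "0.2.0", "name": "mynet", "type": "bridge"}`

	// Assumed plugin location; runtimes normally search the CNI_PATH directories.
	cmd := exec.Command("/opt/cni/bin/bridge")
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",                  // the operation: ADD, DEL, CHECK or VERSION
		"CNI_CONTAINERID=example",          // hypothetical container ID
		"CNI_NETNS=/var/run/netns/example", // hypothetical netns path
		"CNI_IFNAME=eth0",                  // interface name to create inside the netns
		"CNI_PATH=/opt/cni/bin",            // where the plugin may find chained/IPAM plugins
	)
	cmd.Stdin = strings.NewReader(netconf)

	out, err := cmd.Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("plugin result: %s\n", out)
}
```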
The CNI team also maintains some core plugins in a separate repository.
We welcome contributions, including bug reports, and code and documentation improvements. If you intend to contribute to code or documentation, please read CONTRIBUTING.md. Also see the contact section in this README.
The CNI spec is language agnostic. To use the Go language libraries in this repository, you'll need a recent version of Go. You can find the Go versions covered by our automated tests in .travis.yml.
The CNI project maintains a set of reference plugins that implement the CNI specification. NOTE: the reference plugins used to live in this repository but have been split out into a separate repository as of May 2017.
After building and installing the reference plugins, you can use the priv-net-run.sh and docker-run.sh scripts in the scripts/ directory to exercise the plugins.
Note: priv-net-run.sh depends on jq.
Start out by creating a netconf file to describe a network:
$ mkdir -p /etc/cni/net.d
$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF
$ cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
    "cniVersion": "0.2.0",
    "name": "lo",
    "type": "loopback"
}
EOF
The directory /etc/cni/net.d is the default location in which the scripts will look for net configurations.
Next, build the plugins:
$ cd $GOPATH/src/github.com/containernetworking/plugins
$ ./build_linux.sh # or build_windows.sh
Finally, execute a command (ifconfig in this example) in a private network namespace that has joined the mynet network:
$ CNI_PATH=$GOPATH/src/github.com/containernetworking/plugins/bin
$ cd $GOPATH/src/github.com/containernetworking/cni/scripts
$ sudo CNI_PATH=$CNI_PATH ./priv-net-run.sh ifconfig
eth0      Link encap:Ethernet  HWaddr f2:c2:6f:54:b8:2b
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f0c2:6fff:fe54:b82b/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
The environment variable CNI_PATH tells the scripts and library where to look for plugin executables.
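If you are integrating CNI into an application rather than using the scripts, the Go library in this repository (libcni) plays the same role: the CNI_PATH-style list of directories becomes the plugin search path, and the netconf created above is loaded and executed against a container's network namespace. The following is only a minimal sketch, assuming a recent version of the library; the container ID and netns path are placeholders, and exact signatures may differ between releases.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Directories searched for plugin executables, e.g. the value of CNI_PATH.
	pluginDirs := filepath.SplitList(os.Getenv("CNI_PATH"))
	cniConfig := libcni.NewCNIConfig(pluginDirs, nil)

	// Load the network configuration created earlier.
	netConf, err := libcni.ConfFromFile("/etc/cni/net.d/10-mynet.conf")
	if err != nil {
		log.Fatal(err)
	}

	// Describe the container (sandbox) to attach to the network.
	rt := &libcni.RuntimeConf{
		ContainerID: "example",                // placeholder container ID
		NetNS:       "/var/run/netns/example", // placeholder netns path
		IfName:      "eth0",
	}

	// ADD the container to the network; the plugin returns the assigned addresses.
	result, err := cniConfig.AddNetwork(context.TODO(), netConf, rt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}
```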
Use the instructions in the previous section to define a netconf and build the plugins. Next, the docker-run.sh script wraps docker run to execute the plugins prior to entering the container:
$ CNI_PATH=$GOPATH/src/github.com/containernetworking/plugins/bin
$ cd $GOPATH/src/github.com/containernetworking/cni/scripts
$ sudo CNI_PATH=$CNI_PATH ./docker-run.sh --rm busybox:latest ifconfig
eth0      Link encap:Ethernet  HWaddr fa:60:70:aa:07:d1
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f860:70ff:feaa:7d1/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
CNI currently covers a wide range of needs for network configuration due to its simple model and API. However, in the future CNI might want to branch out into other directions:
If these topics are of interest, please contact the team via the mailing list or IRC and find some like-minded people in the community to put a proposal together.
The plugins moved to a separate repo: https://github.com/containernetworking/plugins, and the releases there include binaries and checksums.
Prior to release 0.7.0, the cni release also included a cnitool binary; as this is a developer tool, we suggest you build it yourself.
For any questions about CNI, please reach out via:
If you have a security issue to report, please do so privately to the email addresses listed in the MAINTAINERS file.