Virtual Kubelet is an open source Kubernetes kubelet implementation that masquerades as a kubelet for the purposes of connecting Kubernetes to other APIs. This allows the nodes to be backed by other services like ACI, AWS Fargate, IoT Edge, Tensile Kube, etc. The primary scenario for VK is enabling the extension of the Kubernetes API into serverless container platforms like ACI and Fargate, though we are open to others. However, it should be noted that VK is explicitly not intended to be an alternative to Kubernetes federation.
Virtual Kubelet features a pluggable architecture and direct use of Kubernetes primitives, making it much easier to build on.
We invite the Kubernetes ecosystem to join us in empowering developers to build upon our base. Join our slack channel named virtual-kubelet within the Kubernetes slack group.
The best description is "Kubernetes API on top, programmable back."
The diagram below illustrates how Virtual-Kubelet works.
Virtual Kubelet is focused on providing a library that you can consume in your project to build a custom Kubernetes node agent.
See godoc for up to date instructions on consuming this project: https://godoc.org/github.com/virtual-kubelet/virtual-kubelet
There are implementations available for several providers, see those repos for details on how to deploy.
This project features a pluggable provider interface developers can implement that defines the actions of a typical kubelet.
This enables on-demand and nearly instantaneous container compute, orchestrated by Kubernetes, without having VM infrastructure to manage and while still leveraging the portable Kubernetes API.
Each provider may have its own configuration file and required environment variables.
Providers must provide the following functionality to be considered a supported integration with Virtual Kubelet.
Admiralty Multi-Cluster Scheduler mutates annotated pods into "proxy pods" scheduled on a virtual-kubelet node and creates corresponding "delegate pods" in remote clusters (actually running the containers). A feedback loop updates the statuses and annotations of the proxy pods to reflect the statuses and annotations of the delegate pods. You can find more details in the Admiralty Multi-Cluster Scheduler documentation.
Alibaba Cloud ECI (Elastic Container Instance) is a service that allows you to run containers without having to manage servers or clusters.
You can find more details in the Alibaba Cloud ECI provider documentation.
The Alibaba ECI provider reads the configuration file specified by the --provider-config flag.
An example configuration file is in the ECI provider repository.
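For example, assuming a locally built binary, the provider could be started along these lines; the provider name string and config path shown here are illustrative assumptions, so check the ECI provider repository for the exact values:
./bin/virtual-kubelet --provider="alibabacloud" --provider-config=/etc/virtual-kubelet/eci.toml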
The Azure Container Instances Provider allows you to utilize both typical pods on VMs and Azure Container Instances simultaneously in the same Kubernetes cluster.
You can find detailed instructions on how to set it up and how to test it in the Azure Container Instances Provider documentation.
The Azure connector can use a configuration file specified by the --provider-config flag. The config file is in TOML format, and an example lives in providers/azure/example.toml.
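For example, assuming a locally built binary, the connector could be started with the example config along these lines (the provider name string here is an assumption; the config path is the example file mentioned above):
./bin/virtual-kubelet --provider="azure" --provider-config=providers/azure/example.toml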
AWS Fargate is a technology that allows you to run containers without having to manage servers or clusters.
The AWS Fargate provider allows you to deploy pods to AWS Fargate. Your pods on AWS Fargate have access to VPC networking with dedicated ENIs in your subnets, public IP addresses to connect to the internet, private IP addresses to connect to your Kubernetes cluster, security groups, IAM roles, CloudWatch Logs and many other AWS services. Pods on Fargate can co-exist with pods on regular worker nodes in the same Kubernetes cluster.
Easy instructions and a sample configuration file are available in the AWS Fargate provider documentation. Please note that this provider is not currently supported.
Kip is a provider that runs pods in cloud instances, allowing a Kubernetes cluster to transparently scale workloads into a cloud. When a pod is scheduled onto the virtual node, Kip starts a right-sized cloud instance for the pod's workload and dispatches the pod onto the instance. When the pod is finished running, the cloud instance is terminated.
When workloads run on Kip, your cluster size naturally scales with the cluster workload, pods are strongly isolated from each other and the user is freed from managing worker nodes and strategically packing pods onto nodes.
The HashiCorp Nomad provider for Virtual Kubelet connects your Kubernetes cluster with a Nomad cluster by exposing the Nomad cluster as a node in Kubernetes. By using the provider, pods that are scheduled on the virtual Nomad node registered on Kubernetes will run as jobs on Nomad clients as they would on a Kubernetes node.
For detailed instructions, follow the guide here.
Liqo implements a provider for Virtual Kubelet designed to transparently offload pods and services to "peered" remote Kubernetes clusters. Liqo is capable of discovering neighbor clusters (using DNS or mDNS) and "peering" with them, or in other words, establishing a relationship to share part of the cluster resources. When a cluster has established a peering, a new instance of the Liqo Virtual Kubelet is spawned to seamlessly extend the capacity of the cluster by providing an abstraction of the resources of the remote cluster. The provider, combined with the Liqo network fabric, extends the cluster networking by enabling Pod-to-Pod traffic and multi-cluster east-west services, supporting endpoints on both clusters.
For detailed instructions, follow the guide here.
The OpenStack Zun provider for Virtual Kubelet connects your Kubernetes cluster with OpenStack in order to run Kubernetes pods on OpenStack Cloud. Your pods on OpenStack have access to OpenStack tenant networks because they have Neutron ports in your subnets. Each pod will have private IP addresses to connect to other OpenStack resources (i.e. VMs) within your tenant, optionally have floating IP addresses to connect to the internet, and bind-mount Cinder volumes into a path inside a pod's container.
./bin/virtual-kubelet --provider="openstack"
For detailed instructions, follow the guide here.
Tensile Kube, contributed by Tencent Games, is a provider for Virtual Kubelet that connects your Kubernetes cluster with other Kubernetes clusters. This provider enables extending Kubernetes to virtually unlimited capacity. By using the provider, pods that are scheduled on the virtual node registered on Kubernetes will run as jobs on other Kubernetes clusters' nodes.
Providers consume this project as a library which implements the core logic of a Kubernetes node agent (Kubelet), and wire up their implementation for performing the necessary actions.
There are 3 main interfaces:
When pods are created, updated, or deleted from Kubernetes, these methods are called to handle those actions.
type PodLifecycleHandler interface {
    // CreatePod takes a Kubernetes Pod and deploys it within the provider.
    CreatePod(ctx context.Context, pod *corev1.Pod) error

    // UpdatePod takes a Kubernetes Pod and updates it within the provider.
    UpdatePod(ctx context.Context, pod *corev1.Pod) error

    // DeletePod takes a Kubernetes Pod and deletes it from the provider.
    DeletePod(ctx context.Context, pod *corev1.Pod) error

    // GetPod retrieves a pod by name from the provider (can be cached).
    GetPod(ctx context.Context, namespace, name string) (*corev1.Pod, error)

    // GetPodStatus retrieves the status of a pod by name from the provider.
    GetPodStatus(ctx context.Context, namespace, name string) (*corev1.PodStatus, error)

    // GetPods retrieves a list of all pods running on the provider (can be cached).
    GetPods(context.Context) ([]*corev1.Pod, error)
}
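As a concrete illustration, below is a minimal sketch of an in-memory provider that satisfies PodLifecycleHandler. It is not an official provider; the package and type names (example, mockProvider, etc.) are made up for this example, and a real provider would translate each call into requests against its backing service.

package example

import (
    "context"
    "fmt"
    "sync"

    corev1 "k8s.io/api/core/v1"
)

// mockProvider is a hypothetical in-memory provider used only to illustrate
// the shape of a PodLifecycleHandler implementation.
type mockProvider struct {
    mu   sync.Mutex
    pods map[string]*corev1.Pod
}

func newMockProvider() *mockProvider {
    return &mockProvider{pods: make(map[string]*corev1.Pod)}
}

func podKey(namespace, name string) string { return namespace + "/" + name }

// CreatePod records the pod; a real provider would create it in its backend.
func (p *mockProvider) CreatePod(ctx context.Context, pod *corev1.Pod) error {
    p.mu.Lock()
    defer p.mu.Unlock()
    p.pods[podKey(pod.Namespace, pod.Name)] = pod
    return nil
}

// UpdatePod overwrites the stored pod with the updated spec.
func (p *mockProvider) UpdatePod(ctx context.Context, pod *corev1.Pod) error {
    p.mu.Lock()
    defer p.mu.Unlock()
    p.pods[podKey(pod.Namespace, pod.Name)] = pod
    return nil
}

// DeletePod removes the pod from the in-memory store.
func (p *mockProvider) DeletePod(ctx context.Context, pod *corev1.Pod) error {
    p.mu.Lock()
    defer p.mu.Unlock()
    delete(p.pods, podKey(pod.Namespace, pod.Name))
    return nil
}

// GetPod returns the stored pod, or an error if it is unknown.
func (p *mockProvider) GetPod(ctx context.Context, namespace, name string) (*corev1.Pod, error) {
    p.mu.Lock()
    defer p.mu.Unlock()
    pod, ok := p.pods[podKey(namespace, name)]
    if !ok {
        return nil, fmt.Errorf("pod %s/%s not found", namespace, name)
    }
    return pod, nil
}

// GetPodStatus returns the status recorded on the stored pod.
func (p *mockProvider) GetPodStatus(ctx context.Context, namespace, name string) (*corev1.PodStatus, error) {
    pod, err := p.GetPod(ctx, namespace, name)
    if err != nil {
        return nil, err
    }
    return &pod.Status, nil
}

// GetPods lists every pod currently known to the provider.
func (p *mockProvider) GetPods(ctx context.Context) ([]*corev1.Pod, error) {
    p.mu.Lock()
    defer p.mu.Unlock()
    pods := make([]*corev1.Pod, 0, len(p.pods))
    for _, pod := range p.pods {
        pods = append(pods, pod)
    }
    return pods, nil
}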
There is also an optional interface, PodNotifier, which enables the provider to asynchronously notify the virtual-kubelet about pod status changes. If this interface is not implemented, virtual-kubelet will periodically check the status of all pods.
It is highly recommended to implement PodNotifier, especially if you plan to run a large number of pods.
type PodNotifier interface {
    // NotifyPods instructs the notifier to call the passed in function when
    // the pod status changes.
    //
    // NotifyPods should not block callers.
    NotifyPods(context.Context, func(*corev1.Pod))
}
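Continuing the hypothetical mockProvider sketch from above (same package and imports), one way to satisfy PodNotifier is to store the callback and invoke it from the provider's own reconciliation loop. The markRunning helper below is invented purely for illustration.

// notifyingProvider embeds the hypothetical mockProvider and adds the
// optional PodNotifier interface.
type notifyingProvider struct {
    *mockProvider

    cbMu     sync.Mutex
    notifier func(*corev1.Pod)
}

// NotifyPods stores the callback handed to us by virtual-kubelet.
// It must return quickly and must not block the caller.
func (p *notifyingProvider) NotifyPods(ctx context.Context, cb func(*corev1.Pod)) {
    p.cbMu.Lock()
    defer p.cbMu.Unlock()
    p.notifier = cb
}

// markRunning is an illustrative helper the provider's own polling or event
// loop could call once the backing service reports the pod as running.
func (p *notifyingProvider) markRunning(pod *corev1.Pod) {
    pod.Status.Phase = corev1.PodRunning
    p.cbMu.Lock()
    cb := p.notifier
    p.cbMu.Unlock()
    if cb != nil {
        cb(pod) // virtual-kubelet pushes the new status to the API server
    }
}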
PodLifecycleHandler is consumed by the PodController, which is the core logic for managing pods assigned to the node.
pc, _ := node.NewPodController(podControllerConfig) // <-- instantiates the pod controller
pc.Run(ctx) // <-- starts watching for pods to be scheduled on the node
NodeProvider is responsible for notifying the virtual-kubelet about node status updates. Virtual-Kubelet will periodically check the status of the node and update Kubernetes accordingly.
type NodeProvider interface {
    // Ping checks if the node is still active.
    // This is intended to be lightweight as it will be called periodically as a
    // heartbeat to keep the node marked as ready in Kubernetes.
    Ping(context.Context) error

    // NotifyNodeStatus is used to asynchronously monitor the node.
    // The passed in callback should be called any time there is a change to the
    // node's status.
    // This will generally trigger a call to the Kubernetes API server to update
    // the status.
    //
    // NotifyNodeStatus should not block callers.
    NotifyNodeStatus(ctx context.Context, cb func(*corev1.Node))
}
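For illustration, a minimal NodeProvider for a node whose status never changes could look like the sketch below (same hypothetical package as the earlier examples); a provider with dynamic capacity would instead keep the callback and invoke it on every change.

// staticNodeProvider reports a node whose status never changes after startup.
type staticNodeProvider struct{}

// Ping succeeds as long as the context is alive; return an error here if the
// backing service becomes unreachable so the node is marked not ready.
func (staticNodeProvider) Ping(ctx context.Context) error {
    return ctx.Err()
}

// NotifyNodeStatus ignores the callback because a static node never reports
// asynchronous status changes.
func (staticNodeProvider) NotifyNodeStatus(ctx context.Context, cb func(*corev1.Node)) {}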
Virtual Kubelet provides a NaiveNodeProvider that you can use if you do not plan to have custom node behavior.
NodeProvider gets consumed by the NodeController, which is the core logic for managing the node object in Kubernetes.
nc, _ := node.NewNodeController(nodeProvider, nodeSpec) // <-- instantiate a node controller from a node provider and a kubernetes node spec
nc.Run(ctx) // <-- creates the node in kubernetes and starts up the controller
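The node spec passed to NewNodeController is a plain Kubernetes Node object. The sketch below shows one way it might be filled in; the node name, capacity figures, and taint key are assumptions for the example rather than values required by the library.

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildNodeSpec returns an illustrative node object to register in Kubernetes.
func buildNodeSpec() *corev1.Node {
    return &corev1.Node{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "my-virtual-node",
            Labels: map[string]string{"type": "virtual-kubelet"},
        },
        Spec: corev1.NodeSpec{
            // Taint the node so only workloads that explicitly tolerate the
            // virtual node get scheduled onto it.
            Taints: []corev1.Taint{{
                Key:    "virtual-kubelet.io/provider",
                Value:  "mock",
                Effect: corev1.TaintEffectNoSchedule,
            }},
        },
        Status: corev1.NodeStatus{
            Capacity: corev1.ResourceList{
                corev1.ResourceCPU:    resource.MustParse("20"),
                corev1.ResourceMemory: resource.MustParse("100Gi"),
                corev1.ResourcePods:   resource.MustParse("200"),
            },
        },
    }
}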
One of the roles of a Kubelet is to accept requests from the API server for things like kubectl logs and kubectl exec. Helpers for setting this up are provided here.
Running the unit tests locally is as simple as make test.
Check out test/e2e for more details.
Kubernetes 1.9 introduces a new flag, ServiceNodeExclusion, for the control plane's Controller Manager. Enabling this flag in the Controller Manager's manifest allows Kubernetes to exclude Virtual Kubelet nodes from being added to Load Balancer pools, allowing you to create public-facing services with external IPs without issue.
Cluster requirements: Kubernetes 1.9 or above
Enable the ServiceNodeExclusion flag by modifying the Controller Manager manifest and adding --feature-gates=ServiceNodeExclusion=true to the command line arguments.
Virtual Kubelet follows the CNCF Code of Conduct. Sign the CNCF CLA to be able to make Pull Requests to this repo.
Monthly Virtual Kubelet Office Hours are held at 10am PST on the last Thursday of every month in this zoom meeting room. Check out the calendar here.
Our Google Drive with design specifications and meeting notes is here.
We also have a community slack channel named virtual-kubelet in the Kubernetes slack. You can also connect with the Virtual Kubelet community via the mailing list.