Auto-reconnect
Kube Forwarder watches the connection status and always tries to reconnect on failure
Multiple clusters support
Bookmark and forward Kubernetes services from multiple clusters easily like never before
Share bookmarks
Use the import and export functionality to share bookmarked services with your team or simply back them up
Zero native dependencies
Use port-forwarding without installing kubectl and without having to explain to developers how to use it
Before you start forwarding internal resources to your local machine, you have to add a cluster configuration. There are 3 different options in the app to do this:
When you add a new cluster via auto-detection (option 1) or manually using file selection (option 2), the app parses the configs, and if there are multiple contexts inside, it suggests adding multiple clusters to the app. A few examples of the YAML files we expect can be found there.
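For illustration, here is a minimal sketch of a kubeconfig with two contexts that auto-detection or file selection could pick up (the cluster names, server URLs, and user names are hypothetical):

# A kubeconfig with two contexts (hypothetical names and endpoints)
apiVersion: v1
kind: Config
current-context: local-cluster
clusters:
- name: local-cluster
  cluster:
    server: https://127.0.0.1:6443
- name: remote-cluster
  cluster:
    server: https://remote.example.com:6443
contexts:
- name: local-cluster
  context:
    cluster: local-cluster
    user: local-admin
- name: remote-cluster
  context:
    cluster: remote-cluster
    user: remote-admin
users:
- name: local-admin
  user: {}
- name: remote-admin
  user: {}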
Also, you can add a cluster by filling in a form manually (option 3). The form has the following fields:
Name - the name of the cluster within the Kube Forwarder app.
Storing method (Set destination to your kube config or paste it as a text) - the method of storing a config. It has two options:
Set a path - stores a path to the config file. The file is read every time you forward a port, so a user doesn't have to change anything in Kube Forwarder's settings when a third-party app updates the config file, for example when azure-cli updates an access token (#13).
Paste as a text - stores the config itself as YAML text.
Path (if the storing method is Set a path) - the path to the config file.
Content (if the storing method is Paste as a text) - the YAML config as text.
Current Context (if the storing method is Set a path) - when you use Set a path, you must select a context from the file which will be used to connect to a resource. Let's look at an example of the problem that this field solves.
An example of the problem solved by the Current context field:
1. A user has a config with two contexts, local-cluster and remote-cluster. current-context in the yml file is local-cluster.
2. The user adds the cluster to Kube Forwarder using the Set a path option.
3. The user configures forwarding to postgres and successfully forwards ports for some time.
4. Later, the user runs kubectl config use-context remote-cluster to work with the other cluster.
5. The next time the config file is read, forwarding would connect to remote-cluster, not local-cluster as the user expected, and remote-cluster may not have the postgres resource.
So, to avoid this error, the current context is stored in a separate field.
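To make the failure mode concrete, here is the kind of context switch (done outside the app, with standard kubectl commands and the context names from the example above) that would otherwise redirect an existing forward:

# Check which context the kubeconfig currently points at
kubectl config current-context    # -> local-cluster

# Switch to the other cluster for unrelated work
kubectl config use-context remote-cluster

# Anything that re-reads current-context from the file now targets
# remote-cluster, unless the context was stored separately.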
Kube Forwarder supports forwarding for all resource types supported by kubectl: Pod, Deployment, and Service.
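For reference, a bookmark of each kind roughly corresponds to one of the kubectl commands below (a sketch only; the namespace, resource names, and ports are hypothetical). Kube Forwarder performs the forwarding itself, so kubectl does not need to be installed:

# Forward local port 5432 to port 5432 of a Pod
kubectl port-forward pod/postgres-0 5432:5432 -n my-namespace

# Forward local port 8080 to port 80 of a Deployment
kubectl port-forward deployment/web 8080:80 -n my-namespace

# Forward local port 8080 to port 80 of a Service
kubectl port-forward service/web 8080:80 -n my-namespace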
We ask you to fill in the form with the following fields:
Cluster Name - pick a cluster from one of the added clusters.
Namespace - the namespace of the resource you plan to forward.
Kind – pick one of the options Pod, Deployment or Service.
Name - name of the Pod, Deployment or Service.
Alias - an alternative name for the resource that will be displayed on the homepage (optional)
Port Forwarding
Use Custom Local Address - check this and put an IP address or hostname into the text field to use a different listen address. Putting each service on its own address avoids cookie sharing and port-number collisions between services. Specify a loopback address like 127.0.x.x, or add entries to your hosts file like 127.0.1.1 dashboard.production.kbf and put the assigned name in this field. If blank or unchecked, localhost / 127.0.0.1 will be used.
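As an illustration of the hosts-file approach on macOS or Linux (the hostname is the one from the example above; the port is hypothetical):

# Give the service its own loopback name (requires admin rights)
echo "127.0.1.1 dashboard.production.kbf" | sudo tee -a /etc/hosts

# Set "dashboard.production.kbf" as the Custom Local Address for the bookmark,
# then open e.g. http://dashboard.production.kbf:8080 in a browser.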
Kube Forwarder allows you to export a cluster configuration as JSON that you can share with your team members or keep as a backup. You can easily store it on GitHub. When you export a cluster, you can export it with or without confidential information.
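For example, one way to keep an exported configuration under version control (the file name and branch are hypothetical; prefer the export without confidential information if the repository is shared):

# Commit the exported JSON to a git repository used as a backup
git add kube-forwarder-clusters.json
git commit -m "Back up Kube Forwarder cluster bookmarks"
git push origin master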
brew cask install kube-forwarder
We encourage you to contribute to Kube Forwarder!
We expect contributors to abide by our underlying code of conduct. All conversations and discussions on GitHub (issues, pull requests) must be respectful and harassment-free.
This project was generated with electron-vue@8fae476 using vue-cli. Documentation about the original structure can be found here.
Create your feature branch: git checkout -b feature/that-new-feature or bug/fixing-that-bug
Commit your changes: git commit -m 'Add some feature'
Push to the branch: git push origin feature/that-new-feature
(Note: a Mac is required to build the .dmg target.)
Fork the Kube Forwarder repository (https://github.com/pixel-point/kube-forwarder/fork)
# Clone source code
git clone https://github.com/<your-username>/kube-forwarder
# install dependencies
npm install
# prepare .env files
cp .env.example .env
cp .env.example .env.production
# serve with hot reload in Electron Dev app
npm run dev
# serve WEB version with hot reload at localhost:9081
npm run web
Build an application for production
# Build a target for current OS
npm run build
# Build a target for Windows
npm run build -- -- --win
# Build a target for Linux
npm run build -- -- --linux
# You can mix targets
npm run build -- -- --win --linux
# You can build static and target separately
npm run build:dist
npm run build:target -- --win
A built version will appear in the build directory.
We are using Cypress to run integration tests, including visual regression tests. It's important to run them inside a docker container to get the same screenshots as in Drone CI.
npm run test:cypress
Or you can run it manually on a local machine.
# Run the web version to test it
npm run web
# Run this command in a separate terminal tab
npm run test:cypress:onhost
# Or you can open Cypress GUI
npm run test:cypress:open
Q) Node Sass could not find a binding for your current environment: OS X 64-bit with Node.js 12.x
A) npm rebuild node-sass

Q) Error: spawn .../kube-forwarder/node_modules/electron/dist/Electron.app/Contents/MacOS/Electron ENOENT
A) Reinstall node_modules: rm -rf node_modules && npm i
Also, these steps can be used to configure a CI environment.
1. Copy .env.example to .env.production and fill in the variables.
2. Update the version in package.json and push to the release branch.
3. Run npm run release on a Mac computer to build packages. They will be automatically pushed to releases at GitHub.
4. Run cask-repair kube-forwarder to update the cask version ([About cask-repair](https://github.com/Homebrew/homebrew-cask/blob/master/CONTRIBUTING.md#updating-a-cask)).
Notes:
A tag (for example, v1.0.3) will be added to Git automatically by GitHub when you release your draft.
Use tiffutil -cathidpicheck bg.png bg@2x.png -out bg.tiff to build a tiff background for the .DMG.
This project is licensed under the MIT License - see the LICENSE.md file for details