Deploying your Java application to a Kubernetes cluster can feel like Alice in Wonderland: you keep going down the rabbit hole and don’t know how to make the ride comfortable. This repository explains how a Java application can be deployed, tested, debugged, and monitored in Kubernetes. It also covers canary deployments and a deployment pipeline.
A comprehensive hands-on course explaining these concepts is available at https://www.linkedin.com/learning/kubernetes-for-java-developers.
We will use a simple Java application built using Spring Boot. The application publishes a REST endpoint that can be invoked at http://{host}:{port}/hello. The source code is in the app directory.
Run application:
cd app
mvn spring-boot:run
Test the application:
curl http://localhost:8080/hello
Create m2.tar.gz:
mvn -Dmaven.repo.local=./m2 clean package
tar cvf m2.tar.gz ./m2
Create Docker image:
docker image build -t arungupta/greeting .
The Dockerfile uses a multi-stage build: an earlier stage runs the Maven build (reusing the m2.tar.gz dependency archive created above so dependencies are not downloaded again), and the final stage copies only the built application archive into a slim runtime image, which keeps the image small.
Create Docker image using Jib:
mvn compile jib:build -Pjib
The benefits of using Jib over a multi-stage Dockerfile build include:
Don’t need to install Docker or run a Docker daemon
Don’t need to write a Dockerfile or build the archive of m2 dependencies
Much faster
Builds reproducibly
The above command builds the image and pushes it directly to a Docker registry. Alternatively, Jib can also build to a local Docker daemon:
mvn compile jib:dockerBuild -Pjib -Ddocker.name=arungupta/greeting
Run container:
docker container run --name greeting -p 8080:8080 -d arungupta/greeting
Access application:
curl http://localhost:8080/hello
Remove container:
docker container rm -f greeting
Download JDK 11 and scp it to an Amazon Linux instance.
Install JDK 11:
sudo yum install jdk-11.0.1_linux-x64_bin.rpm
Create a custom JRE for the Spring Boot application:
cp target/app.war target/app.jar
jlink \
  --output myjre \
  --add-modules $(jdeps --print-module-deps target/app.jar),\
java.xml,jdk.unsupported,java.sql,java.naming,java.desktop,\
java.management,java.security.jgss,java.instrument
Build Docker image using this custom JRE:
docker image build --file Dockerfile.jre -t arungupta/greeting:jre-slim .
List the Docker images and show the difference in sizes:
[ec2-user@ip-172-31-21-7 app]$ docker image ls | grep greeting
arungupta/greeting   jre-slim   9eed25582f36   6 seconds ago   162MB
arungupta/greeting   latest     1b7c061dad60   10 hours ago    490MB
Run the container:
docker container run -d -p 8080:8080 arungupta/greeting:jre-slim
Access the application:
curl http://localhost:8080/hello
A single-node Kubernetes cluster can be easily created on a development machine using Minikube, MicroK8s, KIND, or Docker for Mac. Keep in mind that these local development environments do not truly represent your production cluster.
This tutorial will use Docker for Mac.
Ensure that Kubernetes is enabled in Docker for Mac.
Show the list of contexts:
kubectl config get-contexts
Configure the kubectl CLI for the Kubernetes cluster:
kubectl config use-context docker-for-desktop
Install the Helm CLI:
brew install kubernetes-helm
If the Helm CLI is already installed, use brew upgrade kubernetes-helm instead.
Check Helm version:
helm version
Install Helm in Kubernetes cluster:
helm init
If Helm has already been initialized on the cluster, then you may have to upgrade Tiller:
helm init --upgrade
Install the Helm chart:
cd ..
helm install --name myapp manifests/myapp
Check that the pod is running:
kubectl get pods
Check that the service is up:
kubectl get svc
Access the application:
curl http://$(kubectl get svc/myapp-greeting \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello
You can debug a Docker container and a Kubernetes Pod if they’re running locally on your machine.
This was tested using Docker for Mac/Kubernetes. Use the previously deployed Helm chart.
Show service:
kubectl get svc
NAME               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
greeting-service   LoadBalancer   10.101.39.100    <pending>     80:30854/TCP                    8m
kubernetes         ClusterIP      10.96.0.1        <none>        443/TCP                         90d
myapp-greeting     LoadBalancer   10.108.104.178   localhost     8080:32189/TCP,5005:31117/TCP   4s
Note that the debug port (5005) is also forwarded.
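For orientation, a service exposing both ports looks roughly like the sketch below. This is a minimal sketch, not the chart’s actual template; the selector label and port names are assumptions, while the service name and ports follow the output above.

apiVersion: v1
kind: Service
metadata:
  name: myapp-greeting
spec:
  type: LoadBalancer
  selector:
    app: greeting        # assumed pod label used by the chart
  ports:
  - name: http           # REST endpoint
    port: 8080
    targetPort: 8080
  - name: debug          # JDWP remote debug port
    port: 5005
    targetPort: 5005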
In IntelliJ, select Run, Debug, Remote to create a remote debug configuration:
Click on Debug and set up a breakpoint in the class:
Access the application:
curl http://$(kubectl get svc/myapp-greeting \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello
Show the breakpoint hit in IntelliJ:
Delete the Helm chart:
helm delete --purge myapp
This was tested using Docker for Mac.
Run container:
docker container run --name greeting -p 8080:8080 -p 5005:5005 -d arungupta/greeting
Check container:
$ docker container ls -a
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                                            NAMES
724313157e3c        arungupta/greeting   "java -jar app-swarm…"   3 seconds ago       Up 2 seconds        0.0.0.0:5005->5005/tcp, 0.0.0.0:8080->8080/tcp   greeting
Set up a breakpoint as explained above.
Access the application using curl http://localhost:8080/resources/greeting.
This application will be deployed to an Amazon EKS cluster. If you’re looking for a self-paced workshop that provides detailed instructions to get you started with EKS, then eksworkshop.com is your place.
Let’s create the cluster first.
Install eksctl CLI:
brew install weaveworks/tap/eksctl
Create EKS cluster:
eksctl create cluster --name myeks --nodes 4 --region us-west-2
2018-10-25T13:45:38+02:00 [ℹ]  setting availability zones to [us-west-2a us-west-2c us-west-2b]
2018-10-25T13:45:39+02:00 [ℹ]  using "ami-0a54c984b9f908c81" for nodes
2018-10-25T13:45:39+02:00 [ℹ]  creating EKS cluster "myeks" in "us-west-2" region
2018-10-25T13:45:39+02:00 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
2018-10-25T13:45:39+02:00 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --name=myeks'
2018-10-25T13:45:39+02:00 [ℹ]  creating cluster stack "eksctl-myeks-cluster"
2018-10-25T13:57:33+02:00 [ℹ]  creating nodegroup stack "eksctl-myeks-nodegroup-0"
2018-10-25T14:01:18+02:00 [✔]  all EKS cluster resource for "myeks" had been created
2018-10-25T14:01:18+02:00 [✔]  saved kubeconfig as "/Users/argu/.kube/config"
2018-10-25T14:01:19+02:00 [ℹ]  the cluster has 0 nodes
2018-10-25T14:01:19+02:00 [ℹ]  waiting for at least 4 nodes to become ready
2018-10-25T14:01:50+02:00 [ℹ]  the cluster has 4 nodes
2018-10-25T14:01:50+02:00 [ℹ]  node "ip-192-168-161-180.us-west-2.compute.internal" is ready
2018-10-25T14:01:50+02:00 [ℹ]  node "ip-192-168-214-48.us-west-2.compute.internal" is ready
2018-10-25T14:01:50+02:00 [ℹ]  node "ip-192-168-75-44.us-west-2.compute.internal" is ready
2018-10-25T14:01:50+02:00 [ℹ]  node "ip-192-168-82-236.us-west-2.compute.internal" is ready
2018-10-25T14:01:52+02:00 [ℹ]  kubectl command should work with "/Users/argu/.kube/config", try 'kubectl get nodes'
2018-10-25T14:01:52+02:00 [✔]  EKS cluster "myeks" in "us-west-2" region is ready
Check the nodes:
kubectl get nodes
NAME                                            STATUS    ROLES     AGE       VERSION
ip-192-168-161-180.us-west-2.compute.internal   Ready     <none>    52s       v1.10.3
ip-192-168-214-48.us-west-2.compute.internal    Ready     <none>    57s       v1.10.3
ip-192-168-75-44.us-west-2.compute.internal     Ready     <none>    57s       v1.10.3
ip-192-168-82-236.us-west-2.compute.internal    Ready     <none>    54s       v1.10.3
Get the list of contexts:
kubectl config get-contexts
CURRENT   NAME                             CLUSTER                      AUTHINFO                         NAMESPACE
*         arun@myeks.us-west-2.eksctl.io   myeks.us-west-2.eksctl.io    arun@myeks.us-west-2.eksctl.io
          docker-for-desktop               docker-for-desktop-cluster   docker-for-desktop
As indicated by *, the kubectl CLI configuration is updated to the recently created cluster.
Explicitly set the context:
kubectl config use-context arun@myeks.us-west-2.eksctl.io
Install Helm:
kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
Check the list of pods:
kubectl get pods -n kube-system
NAME                            READY     STATUS    RESTARTS   AGE
aws-node-774jf                  1/1       Running   1          2m
aws-node-jrf5r                  1/1       Running   0          2m
aws-node-n46tw                  1/1       Running   0          2m
aws-node-slgns                  1/1       Running   0          2m
kube-dns-7cc87d595-5tskv        3/3       Running   0          8m
kube-proxy-2ghg6                1/1       Running   0          2m
kube-proxy-hqxwg                1/1       Running   0          2m
kube-proxy-lrwrr                1/1       Running   0          2m
kube-proxy-x77tq                1/1       Running   0          2m
tiller-deploy-895d57dd9-txqk4   1/1       Running   0          15s
Redeploy the application:
helm install --name myapp manifests/myapp
Get the service:
kubectl get svc
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                         AGE
kubernetes       ClusterIP      10.100.0.1       <none>                                                                   443/TCP                         17m
myapp-greeting   LoadBalancer   10.100.241.250   a8713338abef211e8970816cb629d414-71232674.us-east-1.elb.amazonaws.com   8080:32626/TCP,5005:30739/TCP   2m
The output shows that ports 8080 and 5005 are published and an Elastic Load Balancer is provisioned. It takes about three minutes for the load balancer to be ready.
Access the application:
curl http://$(kubectl get svc/myapp-greeting \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello
Delete the application:
helm delete --purge myapp
AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh can be used with Amazon EKS or Kubernetes running on AWS. It also works with other container services offered by AWS, such as AWS Fargate and Amazon ECS, and with microservices deployed on Amazon EC2.
A thorough, detailed example that shows how to use App Mesh with EKS is available at Service Mesh with App Mesh. This section provides a simplified setup using the configuration files from there.
All scripts used in this section are in the manifests/appmesh directory.
Set a variable ROLE_NAME to the IAM role of the EKS worker nodes:
ROLE_NAME=$(aws iam list-roles \
  --query \
  'Roles[?contains(RoleName,`eksctl-myeks-nodegroup`)].RoleName' --output text)
Setup permissions for the worker nodes:
aws iam attach-role-policy \
  --role-name $ROLE_NAME \
  --policy-arn arn:aws:iam::aws:policy/AWSAppMeshFullAccess
Enable sidecar injection by running the create.sh script from https://github.com/aws/aws-app-mesh-examples/tree/master/examples/apps/djapp/2_create_injector. In ca-bundle.sh, change MESH_NAME to greeting-app.
Create the prod namespace:
kubectl create namespace prod
Label the prod namespace:
kubectl label namespace prod appmesh.k8s.aws/sidecarInjectorWebhook=enabled
Create CRDs:
kubectl create -f https://raw.githubusercontent.com/aws/aws-app-mesh-examples/master/examples/apps/djapp/3_add_crds/mesh-definition.yaml
kubectl create -f https://raw.githubusercontent.com/aws/aws-app-mesh-examples/master/examples/apps/djapp/3_add_crds/virtual-node-definition.yaml
kubectl create -f https://raw.githubusercontent.com/aws/aws-app-mesh-examples/master/examples/apps/djapp/3_add_crds/virtual-service-definition.yaml
kubectl create -f https://raw.githubusercontent.com/aws/aws-app-mesh-examples/master/examples/apps/djapp/3_add_crds/controller-deployment.yaml
Create a Mesh:
kubectl create -f mesh.yaml
Create Virtual Nodes:
kubectl create -f virtualnodes.yaml
Create Virtual Services:
kubectl create -f virtualservice.yaml
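For orientation, a virtual service that sends most traffic to the hello virtual node looks roughly like the sketch below. It is modeled on the djapp example’s v1beta1 CRDs; the mesh name comes from the earlier step, but the virtual node names, router name, and exact weights are assumptions rather than a copy of virtualservice.yaml.

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualService
metadata:
  name: mostly-hello.prod.svc.cluster.local
  namespace: prod
spec:
  meshName: greeting-app
  virtualRouter:
    name: mostly-hello-router
  routes:
  - name: mostly-hello-route
    http:
      match:
        prefix: /
      action:
        weightedTargets:
        - virtualNodeName: hello     # assumed virtual node name
          weight: 9
        - virtualNodeName: howdy     # assumed virtual node name
          weight: 1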
Create deployments:
kubectl create -f app-hello-howdy.yaml
Create services:
kubectl create -f services.yaml
Find the name of the talker pod:
TALKER_POD=$(kubectl get pods \
  -nprod -lgreeting=talker \
  -o jsonpath='{.items[0].metadata.name}')
Exec into the talker pod:
kubectl exec -nprod $TALKER_POD -it bash
Invoke the mostly-hello service to get back a mostly Hello response:
while [ 1 ]; do curl http://mostly-hello.prod.svc.cluster.local:8080/hello; echo;done
CTRL+C to break the loop.
Invoke the mostly-howdy service to get back a mostly Howdy response:
while [ 1 ]; do curl http://mostly-howdy.prod.svc.cluster.local:8080/hello; echo;done
CTRL+C to break the loop.
Istio is a layer 4/7 proxy that routes and load balances traffic over HTTP, WebSocket, HTTP/2, and gRPC, and supports application protocols such as MongoDB and Redis. Istio uses the Envoy proxy to manage all inbound/outbound traffic in the service mesh.
Istio has a wide variety of traffic management features that live outside the application code, such as A/B testing, phased/canary rollouts, failure recovery, circuit breaker, layer 7 routing and policy enforcement (all provided by the Envoy proxy). Istio also supports ACLs, rate limits, quotas, authentication, request tracing and telemetry collection using its Mixer component. The goal of the Istio project is to support traffic management and security of microservices without requiring any changes to the application; it does this by injecting a sidecar into your pod that handles all network communications.
More details at Getting Started with Istio on Amazon EKS.
Download Istio:
curl -L https://git.io/getLatestIstio | sh -
cd istio-1.*
Include the istio-1.*/bin directory in PATH.
Install Istio on Amazon EKS:
helm install \
  --wait \
  --name istio \
  --namespace istio-system \
  install/kubernetes/helm/istio \
  --set tracing.enabled=true \
  --set grafana.enabled=true
Verify:
kubectl get pods -n istio-system
NAME                                        READY     STATUS    RESTARTS   AGE
grafana-75485f89b9-4lwg5                    1/1       Running   0          1m
istio-citadel-84fb7985bf-4dkcx              1/1       Running   0          1m
istio-egressgateway-bd9fb967d-bsrhz         1/1       Running   0          1m
istio-galley-655c4f9ccd-qwk42               1/1       Running   0          1m
istio-ingressgateway-688865c5f7-zj9db       1/1       Running   0          1m
istio-pilot-6cd69dc444-9qstf                2/2       Running   0          1m
istio-policy-6b9f4697d-g8hc6                2/2       Running   0          1m
istio-sidecar-injector-8975849b4-cnd6l      1/1       Running   0          1m
istio-statsd-prom-bridge-7f44bb5ddb-8r2zx   1/1       Running   0          1m
istio-telemetry-6b5579595f-nlst8            2/2       Running   0          1m
istio-tracing-ff94688bb-2w4wg               1/1       Running   0          1m
prometheus-84bd4b9796-t9kk5                 1/1       Running   0          1m
Check that both Tracing and Grafana add-ons are enabled.
Enable sidecar injection for all pods in the default namespace:
kubectl label namespace default istio-injection=enabled
From the repo’s main directory, deploy the application:
kubectl apply -f manifests/app.yaml
Check the pods and note that each has two containers (one for the application and one for the sidecar):
kubectl get pods -l app=greeting
NAME                       READY     STATUS    RESTARTS   AGE
greeting-d4f55c7ff-6gz8b   2/2       Running   0          5s
Get list of containers in the pod:
kubectl get pods -l app=greeting -o jsonpath={.items[*].spec.containers[*].name}
greeting istio-proxy
Get response:
curl http://$(kubectl get svc/greeting \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello
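For context, manifests/app.yaml presumably defines a Deployment and a LoadBalancer Service along the lines of the sketch below. This is only a sketch: the replica count and port mapping are assumptions, while the image, the app=greeting label, and the service name follow the commands above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting
spec:
  replicas: 1
  selector:
    matchLabels:
      app: greeting
  template:
    metadata:
      labels:
        app: greeting
    spec:
      containers:
      - name: greeting
        image: arungupta/greeting
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: greeting
spec:
  type: LoadBalancer
  selector:
    app: greeting
  ports:
  - port: 80             # the curl above uses the default HTTP port
    targetPort: 8080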
Deploy the application with two versions of greeting, one that returns Hello and another that returns Howdy:
kubectl delete -f manifests/app.yaml
kubectl apply -f manifests/app-hello-howdy.yaml
Check the list of pods:
kubectl get pods -l app=greeting
NAME                              READY     STATUS    RESTARTS   AGE
greeting-hello-69cc7684d-7g4bx    2/2       Running   0          1m
greeting-howdy-788b5d4b44-g7pml   2/2       Running   0          1m
Access the application multiple times to see the different responses:
for i in {1..10}
do
  curl -q http://$(kubectl get svc/greeting -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello
  echo
done
Set up an Istio rule to split traffic, 75% to the Hello version and 25% to the Howdy version of the greeting service:
kubectl apply -f manifests/istio/app-rule-75-25.yaml
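Such a split is typically expressed with a DestinationRule that defines the two version subsets and a VirtualService with weighted routes. The sketch below shows the general shape; the host, subset, and label names are assumptions, not a copy of the repository’s manifest.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: greeting
spec:
  host: greeting
  subsets:
  - name: hello
    labels:
      version: hello     # assumed label on the greeting-hello pods
  - name: howdy
    labels:
      version: howdy     # assumed label on the greeting-howdy pods
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeting
spec:
  hosts:
  - greeting
  http:
  - route:
    - destination:
        host: greeting
        subset: hello
      weight: 75
    - destination:
        host: greeting
        subset: howdy
      weight: 25

The canary rule applied in the next step follows the same pattern with roughly 90/10 weights.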
Invoke the service again to see the traffic split between two services.
Set up an Istio rule to divert 10% of the traffic to the canary:
kubectl delete -f manifests/istio/app-rule-75-25.yaml
kubectl apply -f manifests/istio/app-canary.yaml
Access the application multiple times to see ~10% of greeting messages with Howdy:
for i in {1..50}
do
  curl -q http://$(kubectl get svc/greeting -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello
  echo
done
Istio is deployed as a sidecar proxy into each of your pods; this means it can see and monitor all the traffic flows between your microservices and generate a graphical representation of your mesh traffic. We’ll use the application you deployed in the previous step to demonstrate this.
By default, tracing is disabled. --set tracing.enabled=true was used during Istio installation to ensure tracing was enabled.
Setup access to the tracing dashboard URL using port-forwarding:
kubectl port-forward \
  -n istio-system \
  pod/$(kubectl get pod \
    -n istio-system \
    -l app=jaeger \
    -o jsonpath='{.items[0].metadata.name}') 16686:16686 &
Access the dashboard at http://localhost:16686 and click on Dependencies, DAG.
By default, Grafana is disabled. --set grafana.enabled=true was used during Istio installation to ensure Grafana was enabled. Alternatively, the Grafana add-on can be installed as:
kubectl apply -f install/kubernetes/addons/grafana.yaml
Verify:
kubectl get pods -l app=grafana -n istio-system
NAME                       READY     STATUS    RESTARTS   AGE
grafana-75485f89b9-n4skw   1/1       Running   0          10m
Set up port forwarding to access the Istio dashboard in the Grafana UI:
kubectl -n istio-system \
  port-forward $(kubectl -n istio-system \
    get pod -l app=grafana \
    -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
View the Istio dashboard at http://localhost:3000. Click on Home, Istio Workload Dashboard.
Invoke the endpoint:
curl http://$(kubectl get svc/greeting \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/hello
Delays and timeouts can be injected into services.
Deploy the application:
kubectl delete -f manifests/app.yaml
kubectl apply -f manifests/app-ingress.yaml
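manifests/app-ingress.yaml presumably exposes the application through the Istio ingress gateway so that it can be reached via GATEWAY_URL below. A gateway definition for that would look roughly like this sketch (the gateway name and hosts are assumptions); the application’s VirtualService would then list this gateway in its gateways field.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: greeting-gateway
spec:
  selector:
    istio: ingressgateway    # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"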
Add a 5 second delay to calls to the service:
kubectl apply -f manifests/istio/greeting-delay.yaml
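Istio injects the delay through a fault rule on the route. greeting-delay.yaml presumably looks something like the sketch below; the host, the gateway binding, and the fault percentage are assumptions rather than the repository’s actual file.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeting
spec:
  hosts:
  - "*"
  gateways:
  - greeting-gateway       # assumed gateway name from the ingress manifest
  http:
  - fault:
      delay:
        fixedDelay: 5s     # delay every matching request by 5 seconds
        percent: 100
    route:
    - destination:
        host: greeting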
Invoke the service using a 2 second timeout:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
curl --connect-timeout 2 http://$GATEWAY_URL/resources/greeting
The service will timeout in 2 seconds.
kube-monkey is an implementation of Netflix’s Chaos Monkey for Kubernetes clusters. It randomly deletes Kubernetes pods in the cluster, encouraging and validating the development of failure-resilient services.
Create kube-monkey configuration:
kubectl apply -f manifests/kubemonkey/kube-monkey-configmap.yaml
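The ConfigMap carries kube-monkey’s TOML configuration. A minimal sketch of what it might contain is shown below; the values are assumptions chosen to match the 10am to 4pm schedule described below, not the repository’s actual file.

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-monkey-config-map
  namespace: kube-system
data:
  config.toml: |
    [kubemonkey]
    dry_run = false                           # actually delete pods
    run_hour = 8                              # build the day's kill schedule at 8am
    start_hour = 10                           # earliest time a pod may be killed
    end_hour = 16                             # latest time a pod may be killed
    time_zone = "America/New_York"
    blacklisted_namespaces = ["kube-system"]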
Run kube-monkey:
kubectl apply -f manifests/kubemonkey/kube-monkey-deployment.yaml
Deploy an app that opts in to pod deletion:
kubectl apply -f manifests/kubemonkey/app-kube-monkey.yaml
This application agrees to have up to 40% of its pods killed. The schedule of deletion is defined by the kube-monkey configuration, in this case between 10am and 4pm on weekdays.
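Opting in is done with kube-monkey labels on the Deployment and its pod template. The sketch below shows a complete minimal example; the label keys follow kube-monkey’s convention, while the identifier, mtbf, replica count, and image are assumptions (the kill-value of 40 matches the percentage mentioned above).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting
  labels:
    kube-monkey/enabled: enabled              # opt in to chaos
    kube-monkey/identifier: greeting          # ties the pods back to this Deployment
    kube-monkey/mtbf: "1"                     # mean time between failures, in days
    kube-monkey/kill-mode: random-max-percent
    kube-monkey/kill-value: "40"              # kill up to 40% of the pods per run
spec:
  replicas: 5
  selector:
    matchLabels:
      app: greeting
  template:
    metadata:
      labels:
        app: greeting
        kube-monkey/enabled: enabled
        kube-monkey/identifier: greeting
    spec:
      containers:
      - name: greeting
        image: arungupta/greeting
        ports:
        - containerPort: 8080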
Skaffold is a command line utility that facilitates continuous development for Kubernetes applications. With Skaffold, you can iterate on your application source code locally then deploy it to a remote Kubernetes cluster.
Check context:
kubectl config get-contexts
CURRENT   NAME                               CLUSTER                       AUTHINFO                           NAMESPACE
          arun@eks-gpu.us-west-2.eksctl.io   eks-gpu.us-west-2.eksctl.io   arun@eks-gpu.us-west-2.eksctl.io
*         arun@myeks.us-east-1.eksctl.io     myeks.us-east-1.eksctl.io     arun@myeks.us-east-1.eksctl.io
          docker-for-desktop                 docker-for-desktop-cluster    docker-for-desktop
Change to use local Kubernetes cluster:
kubectl config use-context docker-for-desktop
Download Skaffold:
curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-darwin-amd64 \
  && chmod +x skaffold
Open http://localhost:8080/resources/greeting in a browser. This will show that the page is not available.
Run Skaffold in the application directory:
cd app
skaffold dev
Refresh the page in the browser to see the output.
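skaffold dev watches the source, rebuilds the image on every change, and redeploys it; it is driven by a skaffold.yaml in the application directory. A minimal sketch of such a file is below; the apiVersion, image name, and manifest path are assumptions, not the repository’s actual configuration.

apiVersion: skaffold/v1beta2
kind: Config
build:
  artifacts:
  - image: arungupta/greeting       # rebuilt and tagged on every source change
deploy:
  kubectl:
    manifests:
    - ../manifests/app.yaml         # re-applied after each rebuild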
Complete detailed instructions are available at https://eksworkshop.com/codepipeline/.
Create an IAM role and add an in-line policy that will allow the CodeBuild stage to interact with the EKS cluster:
ACCOUNT_ID=`aws sts get-caller-identity --query Account --output text`
TRUST="{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::${ACCOUNT_ID}:root\" }, \"Action\": \"sts:AssumeRole\" } ] }"
echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "eks:Describe*", "Resource": "*" } ] }' > /tmp/iam-role-policy
aws iam create-role --role-name EksWorkshopCodeBuildKubectlRole --assume-role-policy-document "$TRUST" --output text --query 'Role.Arn'
aws iam put-role-policy --role-name EksWorkshopCodeBuildKubectlRole --policy-name eks-describe --policy-document file:///tmp/iam-role-policy
Add this IAM role to the aws-auth ConfigMap for the EKS cluster:
ROLE=" - rolearn: arn:aws:iam::$ACCOUNT_ID:role/EksWorkshopCodeBuildKubectlRole\n username: build\n groups:\n - system:masters" kubectl get -n kube-system configmap/aws-auth -o yaml | awk "/mapRoles: \|/{print;print \"$ROLE\";next}1" > /tmp/aws-auth-patch.yml kubectl patch configmap/aws-auth -n kube-system --patch "$(cat /tmp/aws-auth-patch.yml)"
Fork the repo https://github.com/aws-samples/kubernetes-for-java-developers
Create a new GitHub token at https://github.com/settings/tokens/new, select repo as the scope, and click on Generate Token. Copy the generated token.
Specify the correct values for GitHubUser, GitHubToken, GitSourceRepo, and the EKS cluster name. Change the branch if you need to:
Click on Create stack to create the resources.
Once the stack creation is complete, open CodePipeline in the AWS Console.
Select the pipeline and wait for the pipeline status to complete:
Access the service:
curl http://$(kubectl get svc/greeting -n default \ -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'):8080/hello
Install the jx CLI:
brew tap jenkins-x/jx
brew install jx
Create a new GitHub token with the following scope:
Install Jenkins X on Amazon EKS:
jx install --provider=eks --git-username arun-gupta --git-api-token GITHUB_TOKEN --batch-mode
The log shows a complete run of the command.
Use jx import to import a project. It needs a Dockerfile and a Maven application in the root directory.