
continuous-deployment-on-kubernetes


Lab: Build a Continuous Deployment Pipeline with Jenkins and Kubernetes

For a more in-depth best-practices guide, see the solution posted here.

Introduction

This guide will take you through the steps necessary to continuously deliver your software to end users by leveraging Google Container Engine and Jenkins to orchestrate the software delivery pipeline. If you are not familiar with basic Kubernetes concepts, have a look at Kubernetes 101.

In order to accomplish this goal you will use the following Jenkins plugins (a short pipeline sketch follows the list):

  • Jenkins Kubernetes Plugin - start Jenkins build executor containers in the Kubernetes cluster when builds are requested, and terminate those containers when builds complete, freeing up resources for the rest of the cluster
  • Jenkins Pipelines - define our build pipeline declaratively and keep it checked into source code management alongside our application code
  • Google OAuth Plugin - allows you to add your Google OAuth credentials to Jenkins
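
To make the first two plugins concrete, here is a minimal declarative Jenkinsfile sketch that requests a build agent pod from the Kubernetes plugin; the container image and stage contents are illustrative assumptions, not this repo's actual Jenkinsfile:

    pipeline {
      agent {
        kubernetes {
          // The Kubernetes plugin launches this pod as a build executor
          // and tears it down when the build completes.
          yaml """
    apiVersion: v1
    kind: Pod
    spec:
      containers:
      - name: golang
        image: golang:1.14   # assumption: any suitable build image
        command: ['cat']
        tty: true
    """
        }
      }
      stages {
        stage('Build') {
          steps {
            container('golang') {
              sh 'go build ./...'
            }
          }
        }
      }
    }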

In order to deploy the application with Kubernetes you will use the following resources (a minimal manifest sketch follows the list):

  • Deployments - replicates our application across our Kubernetes nodes and allows us to do a controlled rolling update of our software across the fleet of application instances
  • Services - load balancing and service discovery for our internal services
  • Ingress - external load balancing and SSL termination for our external service
  • Secrets - secure storage of non-public configuration information, SSL certs specifically in our case
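
For orientation, a minimal Deployment-plus-Service pair looks roughly like the sketch below; the names, labels, and image are placeholders, not the manifests from this repo's sample-app/k8s directory:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-frontend
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: example
      template:
        metadata:
          labels:
            app: example
        spec:
          containers:
          - name: app
            image: gcr.io/my-project/example:1.0.0  # placeholder image
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: example-frontend
    spec:
      selector:
        app: example       # routes traffic to pods carrying this label
      ports:
      - port: 80
        targetPort: 8080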

Prerequisites

  1. A Google Cloud Platform Account
  2. Enable the Compute Engine, Container Engine, and Container Builder APIs

Do this first

In this section you will start your Google Cloud Shell and clone the lab code repository to it.

  1. Create a new Google Cloud Platform project: https://console.developers.google.com/project

  2. Click the Activate Cloud Shell icon in the top-right and wait for your shell to open.

    If you are prompted with a Learn more message, click Continue to finish opening the Cloud Shell.

  3. When the shell is open, use the gcloud command line interface tool to set your default compute zone:

    gcloud config set compute/zone us-east1-d

    Output (do not copy):

    Updated property [compute/zone].
    
  4. Set an environment variable with your project:

    export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project)

    Output (do not copy):

    Your active configuration is: [cloudshell-...]
    
  5. Clone the lab repository in your Cloud Shell, then cd into that directory:

    git clone https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes.git

    Output (do not copy):

    Cloning into 'continuous-deployment-on-kubernetes'...
    ...
    
    cd continuous-deployment-on-kubernetes

Create a Service Account with permissions

  1. Create a service account on Google Cloud Platform (GCP).

    Create a new service account because it's the recommended way to avoid using extra permissions in Jenkins and the cluster.

    gcloud iam service-accounts create jenkins-sa \
        --display-name "jenkins-sa"

    Output (do not copy):

    Created service account [jenkins-sa].
    
  2. Add required permissions to the service account, using predefined roles.

    Most of these permissions are related to Jenkins use of Cloud Build, and storing/retrieving build artifacts in Cloud Storage. Also, the service account needs to enable the Jenkins agent to read from a repo you will create in Cloud Source Repositories (CSR).

    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/viewer"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/source.reader"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/storage.admin"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/storage.objectAdmin"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/cloudbuild.builds.editor"
    
    gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
        --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
        --role "roles/container.developer"

    You can check the permissions added using IAM & admin in Cloud Console. A consolidated version of these six commands is sketched below.
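
    The same six bindings can be applied with a shell loop; this is purely a convenience and grants exactly the roles listed above:

    for role in roles/viewer roles/source.reader roles/storage.admin \
                roles/storage.objectAdmin roles/cloudbuild.builds.editor \
                roles/container.developer; do
      gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
          --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
          --role "$role"
    done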

  3. Export the service account credentials to a JSON key file in Cloud Shell:

    gcloud iam service-accounts keys create ~/jenkins-sa-key.json \
        --iam-account "jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com"

    Output (do not copy):

    created key [...] of type [json] as [/home/.../jenkins-sa-key.json] for [jenkins-sa@myproject.iam.gserviceaccount.com]
    
  4. Download the JSON key file to your local machine.

    Click Download File from More on the Cloud Shell toolbar:

  5. Enter the File path as jenkins-sa-key.json and click Download.

    The file will be downloaded to your local machine, for use later.

Create a Kubernetes Cluster

  1. Provision the cluster with gcloud:

    Use Google Kubernetes Engine (GKE) to create and manage your Kubernetes cluster, named jenkins-cd. Use the service account created earlier.

    gcloud container clusters create jenkins-cd \
      --num-nodes 2 \
      --machine-type n1-standard-2 \
      --cluster-version 1.15 \
      --service-account "jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com"

    Output (do not copy):

    NAME        LOCATION    MASTER_VERSION  MASTER_IP     MACHINE_TYPE  NODE_VERSION   NUM_NODES  STATUS
    jenkins-cd  us-east1-d  1.15.11-gke.15   35.229.29.69  n1-standard-2 1.15.11-gke.15  2          RUNNING
    
  2. Once that operation completes, retrieve the credentials for your cluster.

    gcloud container clusters get-credentials jenkins-cd

    Output (do not copy):

    Fetching cluster endpoint and auth data.
    kubeconfig entry generated for jenkins-cd.
    
  3. Confirm that the cluster is running and kubectl is working by listing pods:

    kubectl get pods

    Output (do not copy):

    No resources found.
    

    You would see an error if the cluster was not created, or you did not have permissions.

  4. Add yourself as a cluster administrator in the cluster's RBAC so that you can give Jenkins permissions in the cluster:

    kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account)

    Output (do not copy):

    Your active configuration is: [cloudshell-...]
    clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
    

Install Helm

In this lab, you will use Helm to install Jenkins with a stable chart. Helm is a package manager that makes it easy to configure and deploy Kubernetes applications. Once you have Jenkins installed, you'll be able to set up your CI/CD pipeline.

  1. Download and install the helm binary:

    wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
  2. Unpack the archive on your local system:

    tar zxfv helm-v3.2.1-linux-amd64.tar.gz
    cp linux-amd64/helm .
  3. Add the official stable repository.

    ./helm repo add stable https://kubernetes-charts.storage.googleapis.com
  4. Ensure Helm is properly installed by running the following command. You should see version v3.2.1 appear:

    ./helm version

    Output (do not copy):

    version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
    

Configure and Install Jenkins

You will use a custom values file to add the GCP-specific plugin necessary to use service account credentials to reach your Cloud Source Repository.
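
For orientation, a values file for this chart typically pins the plugin list. The excerpt below is a plausible sketch of what jenkins/values.yaml contains, not a verbatim copy; open the file in the repo for the authoritative contents:

    master:
      installPlugins:
        - kubernetes:latest            # dynamic build agents in the cluster
        - workflow-aggregator:latest   # Jenkins Pipeline support
        - google-oauth-plugin:latest   # GCP service account credentials
        - google-source-plugin:latest  # pull from Cloud Source Repositories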

  1. Use the Helm CLI to deploy the chart with your configuration set.

    ./helm install cd-jenkins -f jenkins/values.yaml stable/jenkins --version 1.7.3 --wait

    Output (do not copy):

    ...
    For more information on running Jenkins on Kubernetes, visit:
    https://cloud.google.com/solutions/jenkins-on-container-engine
    
  2. The Jenkins pod STATUS should change to Running when it's ready:

    kubectl get pods

    Output (do not copy):

    NAME                          READY     STATUS    RESTARTS   AGE
    cd-jenkins-7c786475dd-vbhg4   1/1       Running   0          1m
    
  3. Configure the Jenkins service account to be able to deploy to the cluster.

    kubectl create clusterrolebinding jenkins-deploy --clusterrole=cluster-admin --serviceaccount=default:cd-jenkins

    Output (do not copy):

    clusterrolebinding.rbac.authorization.k8s.io/jenkins-deploy created
    
  4. Set up port forwarding to the Jenkins UI, from Cloud Shell:

    export JENKINS_POD_NAME=$(kubectl get pods -l "app.kubernetes.io/component=jenkins-master" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $JENKINS_POD_NAME 8080:8080 >> /dev/null &
  5. Now, check that the Jenkins Service was created properly:

    kubectl get svc

    Output (do not copy):

    NAME               CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
    cd-jenkins         10.35.249.67   <none>        8080/TCP    3h
    cd-jenkins-agent   10.35.248.1    <none>        50000/TCP   3h
    kubernetes         10.35.240.1    <none>        443/TCP     9h
    

    This Jenkins configuration is using the Kubernetes Plugin, so that builder nodes will be automatically launched as necessary when the Jenkins master requests them. Upon completion of the work, the builder nodes will be automatically turned down, and their resources added back to the cluster's resource pool.

    Notice that this service exposes ports 8080 and 50000 for any pods that match the selector. This will expose the Jenkins web UI and builder/agent registration ports within the Kubernetes cluster. Additionally, the jenkins-ui service is exposed using a ClusterIP so that it is not accessible from outside the cluster.
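
    As a sketch of what these ClusterIP Services look like (field values here are assumptions; inspect the real objects with kubectl get svc cd-jenkins -o yaml):

    apiVersion: v1
    kind: Service
    metadata:
      name: cd-jenkins
    spec:
      type: ClusterIP      # no external IP; reachable only inside the cluster
      selector:
        app.kubernetes.io/component: jenkins-master
      ports:
      - name: http
        port: 8080         # Jenkins web UI
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: cd-jenkins-agent
    spec:
      type: ClusterIP
      selector:
        app.kubernetes.io/component: jenkins-master
      ports:
      - name: agent-listener
        port: 50000        # builder/agent registration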

Connect to Jenkins

  1. The Jenkins chart will automatically create an admin password for you. To retrieve it, run:

    printf $(kubectl get secret cd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
  2. To get to the Jenkins user interface, click on the Web Preview button in Cloud Shell, then click Preview on port 8080:

You should now be able to log in with username admin and your auto-generated password.

Your progress, and what's next

You've got a Kubernetes cluster managed by GKE. You've deployed:

  • a Jenkins Deployment
  • a (non-public) service that exposes Jenkins to its agent containers

You have the tools to build a continuous deployment pipeline. Now you need a sample app to deploy continuously.

The sample app

You'll use a very simple sample application - gceme - as the basis for your CD pipeline. gceme is written in Go and is located in the sample-app directory in this repo. When you run the gceme binary on a GCE instance, it displays the instance's metadata in a pretty card.

The binary supports two modes of operation, designed to mimic a microservice. In backend mode, gceme will listen on a port (8080 by default) and return GCE instance metadata as JSON, with content-type=application/json. In frontend mode, gceme will query a backend gceme service and render that JSON in the UI you saw above. It looks roughly like this:

-----------      ------------      ~~~~~~~~~~~~        -----------
|         |      |          |      |          |        |         |
|  user   | ---> |   gceme  | ---> | lb/proxy | -----> |  gceme  |
|(browser)|      |(frontend)|      |(optional)|   |    |(backend)|
|         |      |          |      |          |   |    |         |
-----------      ------------      ~~~~~~~~~~~~   |    -----------
                                                  |    -----------
                                                  |    |         |
                                                  |--> |  gceme  |
                                                       |(backend)|
                                                       |         |
                                                       -----------

Both the frontend and backend modes of the application support two additional URLs, exercised in the short sketch after this list:

  1. /version prints the version of the binary (declared as a const in main.go)
  2. /healthz reports the health of the application. In frontend mode, health will be OK if the backend is reachable.
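
Both URLs can be exercised with curl once an instance is reachable; the host and port below are assumptions based on the defaults described above:

    # Assuming a gceme instance is listening on localhost:8080
    curl http://localhost:8080/version   # prints e.g. 1.0.0
    curl http://localhost:8080/healthz   # OK when healthy (frontend mode: backend reachable)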

Deploy the sample app to Kubernetes

In this section you will deploy the gceme frontend and backend to Kubernetes using Kubernetes manifest files (included in this repo) that describe the environment that the gceme binary/Docker image will be deployed to. They use a default gceme Docker image that you will be updating with your own in a later section.

You'll have two primary environments - canary and production - and use Kubernetes to manage them.

Note: The manifest files for this section of the tutorial are in sample-app/k8s. You are encouraged to open and read each one before creating it per the instructions.

  1. First, change directories to the sample-app, back in Cloud Shell:

    cd sample-app
  2. Create the namespace for production:

    kubectl create ns production

    Output (do not copy):

    namespace/production created
    
  3. Create the production Deployments for frontend and backend:

    kubectl --namespace=production apply -f k8s/production

    Output (do not copy):

    deployment.extensions/gceme-backend-production created
    deployment.extensions/gceme-frontend-production created
    
  4. Create the canary Deployments for frontend and backend:

    kubectl --namespace=production apply -f k8s/canary

    Output (do not copy):

    deployment.extensions/gceme-backend-canary created
    deployment.extensions/gceme-frontend-canary created
    
  5. Create the Services for frontend and backend:

    kubectl --namespace=production apply -f k8s/services

    Output (do not copy):

    service/gceme-backend created
    service/gceme-frontend created
    
  6. Scale the production frontend service:

    kubectl --namespace=production scale deployment gceme-frontend-production --replicas=4

    Output (do not copy):

    deployment.extensions/gceme-frontend-production scaled
    
  7. Retrieve the External IP for the production services:

    This field may take a few minutes to appear as the load balancer is being provisioned.

    kubectl --namespace=production get service gceme-frontend

    Output (do not copy):

    NAME             TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
    gceme-frontend   LoadBalancer   10.35.254.91   35.196.48.78   80:31088/TCP   1m
    
  8. Confirm that both services are working by opening the frontend EXTERNAL-IP in your browser.

  9. Poll the production endpoint's /version URL.

    Open a new Cloud Shell terminal by clicking the + button to the right of the current terminal's tab.

    export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}"  --namespace=production services gceme-frontend)
    while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 3;  done

    Output (do not copy):

    1.0.0
    1.0.0
    1.0.0
    

    You should see that all requests are serviced by v1.0.0 of the application.

    Leave this running in the second terminal so you can easily observe rolling updates in the next section.

  10. Return to the first terminal/tab in Cloud Shell.

Create a repository for the sample app source

Here you'll create your own copy of the gceme sample app in Cloud Source Repositories.

  1. Initialize the git repository.

    Make sure to work from the sample-app directory of the repo you cloned previously.

    git init
    git config credential.helper gcloud.sh
    gcloud source repos create gceme
  2. Add a git remote for the new repo in Cloud Source Repositories.

    git remote add origin https://source.developers.google.com/p/$GOOGLE_CLOUD_PROJECT/r/gceme
  3. Ensure git is able to identify you:

    git config --global user.email "YOUR-EMAIL-ADDRESS"
    git config --global user.name "YOUR-NAME"
  4. Add, commit, and push all the files:

    git add .
    git commit -m "Initial commit"
    git push origin master

    Output (do not copy):

    To https://source.developers.google.com/p/myproject/r/gceme
     * [new branch]      master -> master
    

Create a pipeline

You'll now use Jenkins to define and run a pipeline that will test, build, and deploy your copy of gceme to your Kubernetes cluster. You'll approach this in phases. Let's get started with the first.

Phase 1: Add your service account credentials

First, you will need to configure GCP credentials in order for Jenkins to be able to access the code repository:

  1. In the Jenkins UI, click Credentials on the left

  2. Click the (global) link

  3. Click Add Credentials on the left

  4. From the Kind dropdown, select Google Service Account from private key

  5. Enter the Project Name from your project

  6. Leave JSON key selected, and click Choose File.

  7. Select the jenkins-sa-key.json file downloaded earlier, then click Open.

  8. Click OK

You should now see 1 global credential. Make a note of the name of the credential, as you will reference this in Phase 2.

Phase 2: Create a job

This lab uses Jenkins Pipeline to define builds as Groovy scripts.

Navigate to your Jenkins UI and follow these steps to configure a Pipeline job (hot tip: you can find the IP address of your Jenkins install with kubectl get ingress --namespace jenkins):

  1. Click the Jenkins link in the toolbar at the top left of the UI

  2. Click the New Item link in the left nav

  3. For item name use sample-app, choose the Multibranch Pipeline option, then click OK

  4. Click Add source and choose git

  5. Paste the HTTPS clone URL of your gceme repo on Cloud Source Repositories into the Project Repository field. It will look like: https://source.developers.google.com/p/[REPLACE_WITH_YOUR_PROJECT_ID]/r/gceme

  6. From the Credentials dropdown, select the name of the credential from Phase 1. It should have the format PROJECT_ID service account.

  7. Under the Scan Multibranch Pipeline Triggers section, check the Periodically if not otherwise run box, then set the Interval value to 1 minute.

  8. Click Save, leaving all other options with default values.

    A Branch indexing job was kicked off to identify any branches in your repository.

  9. Click Jenkins > sample-app, in the top menu.

    You should see the master branch now has a job created for it.

    The first run of the job will fail until the project name is set properly in the Jenkinsfile in the next step.

Phase 3: Modify Jenkinsfile, then build and test the app

  1. Create a branch for the canary environment called canary

    git checkout -b canary

    Output (do not copy):

    Switched to a new branch 'canary'
    

    The Jenkinsfile is written using the Jenkins Workflow DSL, which is Groovy-based. It allows an entire build pipeline to be expressed in a single script that lives alongside your source code and supports powerful features like parallelization, stages, and user input.
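
    As a rough illustration of that shape (the stage names, steps, and image path below are assumptions for this sketch, not the repo's actual Jenkinsfile):

    pipeline {
      agent any
      environment {
        PROJECT = 'REPLACE_WITH_YOUR_PROJECT_ID'  // the value you edit in the next step
      }
      stages {
        stage('Test') {
          steps { sh 'go test ./...' }
        }
        stage('Build and push image') {
          steps { sh 'gcloud builds submit --tag gcr.io/$PROJECT/gceme:$BUILD_NUMBER .' }
        }
        stage('Deploy') {
          steps { sh 'kubectl --namespace=production apply -f k8s/production' }
        }
      }
    }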

  2. Update your Jenkinsfile script with the correct PROJECT environment value.

    Be sure to replace REPLACE_WITH_YOUR_PROJECT_ID with your project ID.

    Save your changes, but don't commit the new Jenkinsfile change just yet. You'll make one more change in the next section, then commit and push them together.

Phase 4: Deploy a canary release to canary

Now that your pipeline is working, it's time to make a change to the gceme app and let your pipeline test, package, and deploy it.

The canary environment is rolled out as a percentage of the pods behind the production load balancer. In this case we have 1 out of 5 of our frontends running the canary code and the other 4 running the production code. This allows you to ensure that the canary code is not negatively affecting users before rolling out to your full fleet. You can use the labels env: production and env: canary in Google Cloud Monitoring in order to monitor the performance of each version individually.
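
The mechanics behind this, sketched under assumed label names (see sample-app/k8s for the real manifests): both Deployments carry the labels that the frontend Service selects on, so the Service spreads traffic across all five pods, while the env label distinguishes the two versions:

    apiVersion: v1
    kind: Service
    metadata:
      name: gceme-frontend
    spec:
      type: LoadBalancer
      selector:           # matches BOTH production and canary pods
        app: gceme
        role: frontend
      ports:
      - port: 80
    ---
    # gceme-frontend-production: replicas: 4, pod labels include env: production
    # gceme-frontend-canary:     replicas: 1, pod labels include env: canary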

  1. In the sample-app repository on your workstation open html.go and replace the word blue with orange (there should be exactly two occurrences):
    //snip
    <div class="card orange">
    <div class="card-content white-text">
    <div class="card-title">Backend that serviced this request</div>
    //snip
  2. In the same repository, open main.go and change the version number from 1.0.0 to 2.0.0:

    //snip
    const version string = "2.0.0"
    //snip
  3. Push the version 2 changes to the repo:

    git add Jenkinsfile html.go main.go
    git commit -m "Version 2"
    git push origin canary
  4. Revisit your sample-app in the Jenkins UI.

    Navigate back to your Jenkins sample-app job. Notice a canary pipeline job has been created.

  5. Follow the canary build output.

    • Click the Canary link.
    • Click the #1 link in the Build History box, on the lower left.
    • Click Console Output from the left-side menu.
    • Scroll down to follow.
  6. Track the output for a few minutes.

    When you see Finished: SUCCESS, open the Cloud Shell terminal that you left polling /version of canary. Observe that some requests are now handled by the canary 2.0.0 version.

    1.0.0
    1.0.0
    1.0.0
    1.0.0
    2.0.0
    2.0.0
    1.0.0
    1.0.0
    1.0.0
    1.0.0
    

    You have now rolled out that change, version 2.0.0, to a subset of users.

  7. Continue the rollout to the rest of your users.

    Back in the other Cloud Shell terminal, merge the canary branch into master, then push it to the Git server.

    git checkout master
    git merge canary
    git push origin master
  8. Watch the pipelines in the Jenkins UI handle the change.

    Within a minute or so, you should see a new job in the Build Queue and Build Executor.

  9. Clicking on the master link will show you the stages of your pipeline as well as pass/fail and timing characteristics.

    You can see the failed master job #1, and the successful master job #2.

  10. Check the Cloud Shell terminal responses again.

    In Cloud Shell, open the terminal polling canary's /version URL and observe that the new version, 2.0.0, has been rolled out and is serving all requests.

    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    2.0.0
    

If you want to understand the pipeline stages in greater detail, you can look through the Jenkinsfile in the sample-app project directory.

Phase 5: Deploy a development branch

Oftentimes changes will not be so trivial that they can be pushed directly to the canary environment. In order to create a development environment from a long-lived feature branch, all you need to do is push it up to the Git server. Jenkins will automatically deploy your development environment.

In this case you will not use a load balancer, so you'll have to access your application using kubectl proxy. This proxy authenticates itself with the Kubernetes API and proxies requests from your local machine to the service in the cluster without exposing your service to the internet.
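
For reference, the kubectl proxy pattern looks like the sketch below; the URL path mirrors the one the build prints later in this section (the numbered steps below use kubectl port-forward instead, which achieves the same local-only access):

    kubectl proxy &   # authenticated tunnel to the Kubernetes API on localhost:8001
    curl http://localhost:8001/api/v1/proxy/namespaces/new-feature/services/gceme-frontend:80/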

Deploy the development branch

  1. Create another branch and push it up to the Git server

    git checkout -b new-feature
    git push origin new-feature
  2. Open Jenkins in your web browser and navigate back to sample-app.

    You should see that a new job called new-feature has been created, and this job is creating your new environment.

  3. Navigate to the console output of the first build of this new job by:

    • Click the new-feature link in the job list.
    • Click the #1 link in the Build History list on the left of the page.
    • Finally click the Console Output link in the left menu.
  4. Scroll to the bottom of the console output of the job to see instructions for accessing your environment:

    Successfully verified extensions/v1beta1/Deployment: gceme-frontend-dev
    AvailableReplicas = 1, MinimumReplicas = 1
    
    [Pipeline] echo
    To access your environment run `kubectl proxy`
    [Pipeline] echo
    Then access your service via
    http://localhost:8001/api/v1/proxy/namespaces/new-feature/services/gceme-frontend:80/
    [Pipeline] }
    

Access the development branch

  1. Set up port forwarding to the dev frontend, from Cloud Shell:

    export DEV_POD_NAME=$(kubectl get pods -n new-feature -l "app=gceme,env=dev,role=frontend" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward -n new-feature $DEV_POD_NAME 8001:80 >> /dev/null &
  2. Access your application via localhost:

    curl http://localhost:8001/api/v1/proxy/namespaces/new-feature/services/gceme-frontend:80/

    Output (do not copy):

    <!doctype html>
    <html>
    ...
    </div>
    <div class="col s2">&nbsp;</div>
    </div>
    </div>
    </html>
    

    Look through the response output for "card orange" that was changed earlier.

  3. You can now push code changes to the new-feature branch in order to update your development environment.

  4. Once you are done, merge your new-feature branch back into the canary branch to deploy that code to the canary environment:

    git checkout canary
    git merge new-feature
    git push origin canary
  5. When you are confident that your code won't wreak havoc in production, merge from the canary branch to the master branch. Your code will be automatically rolled out in the production environment:

    git checkout master
    git merge canary
    git push origin master
  6. When you are done with your development branch, delete it from Cloud Source Repositories, then delete the environment in Kubernetes:

    git push origin :new-feature
    kubectl delete ns new-feature

Extra credit: deploy a breaking change, then roll back

Make a breaking change to the gceme source, push it, and deploy it through the pipeline to production. Then pretend latency spiked after the deployment and you want to roll back. Do it! Faster!

Things to consider (one possible approach is sketched after the list):

  • What is the Docker image you want to deploy for the rollback?
  • How can you interact directly with Kubernetes to trigger the deployment?
  • Is SRE really what you want to do with your life?
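
One possible approach, as a hint: kubectl rollout can revert a Deployment to its previous ReplicaSet without rebuilding an image. The deployment name below comes from the production manifests used earlier:

    # Inspect the revision history, then roll back one revision.
    kubectl --namespace=production rollout history deployment gceme-frontend-production
    kubectl --namespace=production rollout undo deployment gceme-frontend-production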

Clean up

Clean up is really easy, but also super important: if you don't follow these instructions, you will continue to be billed for the GKE cluster you created.

To clean up, navigate to the Google Developers Console Project List, choose the project you created for this lab, and delete it. That's it.
