Jenkins plugin to run dynamic agents in a Kubernetes cluster.
Based on the Scaling Docker with Kubernetes article, it automates the scaling of Jenkins agents running in Kubernetes.
The plugin creates a Kubernetes Pod for each agent started, and stops it after each build.
Agents are launched as inbound agents, so it is expected that the container connects automatically to the Jenkins controller. For that, some environment variables are automatically injected:

- JENKINS_URL: Jenkins web interface URL
- JENKINS_SECRET: the secret key for authentication
- JENKINS_AGENT_NAME: the name of the Jenkins agent
- JENKINS_NAME: the name of the Jenkins agent (deprecated, only here for backwards compatibility)

Tested with jenkins/inbound-agent; see the Docker image source code.
It is not required to run the Jenkins controller inside Kubernetes.
Fill in the Kubernetes plugin configuration. To do that, open the Jenkins UI and navigate to Manage Jenkins -> Manage Nodes and Clouds -> Configure Clouds -> Add a new cloud -> Kubernetes, then enter the Kubernetes URL and Jenkins URL appropriately, unless Jenkins is running in Kubernetes, in which case the defaults work.
Supported credentials include:

- Username/password
- Certificates
- Kubernetes service account
To test that this connection is successful, you can use the Test Connection button to ensure there is adequate communication from Jenkins to the Kubernetes cluster, as seen below.
In addition to that, in the Kubernetes Pod Template section, we need to configure the image that will be used to spin up the agent pod. We do not recommend overriding the jnlp container except under unusual circumstances. For your agent, you can use the default Jenkins agent image available on Docker Hub. In the 'Kubernetes Pod Template' section you need to specify the following (the rest of the configuration is up to you):

- Kubernetes Pod Template Name: can be any value; it will be shown as a prefix for the unique generated agent names, which will be run automatically during builds
- Docker image: the Docker image name that will be used as a reference to spin up a new Jenkins agent, as seen below
If you check WebSocket then agents will connect over HTTP(S) rather than the Jenkins service TCP port. This is unnecessary when the Jenkins controller runs in the same Kubernetes cluster, but can greatly simplify setup when agents are in an external cluster and the Jenkins controller is not directly accessible (for example, it is behind a reverse proxy). See JEP-222 for more.

Note: if your Jenkins controller is outside the cluster and uses a self-signed HTTPS certificate, you will need some additional configuration.
Clouds can be configured to only allow certain jobs to use them.
To enable this, in your cloud's advanced configuration check the Restrict pipeline support to authorized folders box. For a job to then use this cloud configuration, you will need to add it in the job's folder configuration.
The Kubernetes plugin allocates Jenkins agents in Kubernetes pods. Within these pods, there is always one special container jnlp that is running the Jenkins agent. Other containers can run arbitrary processes of your choosing, and it is possible to run commands dynamically in any container in the agent pod.
Pod templates defined using the user interface declare a label. When a freestyle job or a pipeline job using node('some-label') uses a label declared by a pod template, the Kubernetes cloud allocates a new pod to run the Jenkins agent.
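For instance, a minimal scripted sketch (the label java-agent is hypothetical; it must match the label configured in the UI pod template):

node('java-agent') {
    // runs inside a pod allocated from the matching UI-defined template
    sh 'java -version'
}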
It should be noted that the main reason to use the global pod template definition is to migrate a huge corpus of existing projects (including freestyle) to run on Kubernetes without changing job definitions. New users setting up new Kubernetes builds should use the podTemplate step as shown in the example snippets here.
The podTemplate step defines an ephemeral pod template. It is created while the pipeline execution is within the podTemplate block, and it is immediately deleted afterwards. Such pod templates are not intended to be shared with other builds or projects in the Jenkins instance.
The following idiom creates a pod template with a generated unique label (available as POD_LABEL) and runs commands inside it.
podTemplate {
node(POD_LABEL) {
// pipeline steps...
}
}
Commands will be executed by default in the jnlp container, where the Jenkins agent is running. (The jnlp name is historical and is retained for compatibility.)
This will run in the jnlp container:
podTemplate {
node(POD_LABEL) {
stage('Run shell') {
sh 'echo hello world'
}
}
}
Find more examples in the examples dir.
The default jnlp agent image used can be customized by adding it to the template
containerTemplate(name: 'jnlp', image: 'jenkins/inbound-agent:4.7-1', args: '${computer.jnlpmac} ${computer.name}'),
or with the yaml syntax
apiVersion: v1
kind: Pod
spec:
containers:
- name: jnlp
image: 'jenkins/inbound-agent:4.7-1'
    args: ['$(JENKINS_SECRET)', '$(JENKINS_NAME)']
Multiple containers can be defined for the agent pod, with shared resources, like mounts. Ports in each container can be accessed as in any Kubernetes pod, by using localhost.
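As an illustrative sketch (the images and commands are examples, not requirements), a service running in one container can be reached from another through localhost:

podTemplate(containers: [
    containerTemplate(name: 'redis', image: 'redis:6', command: 'redis-server'),
    containerTemplate(name: 'client', image: 'redis:6', command: 'sleep', args: '99d')
]) {
    node(POD_LABEL) {
        container('client') {
            // containers in the same pod share a network namespace
            sh 'redis-cli -h localhost ping' // prints PONG once the server is ready
        }
    }
}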
The container step allows executing commands in each container.
podTemplate(containers: [
containerTemplate(name: 'maven', image: 'maven:3.8.1-jdk-8', command: 'sleep', args: '99d'),
containerTemplate(name: 'golang', image: 'golang:1.16.5', command: 'sleep', args: '99d')
]) {
node(POD_LABEL) {
stage('Get a Maven project') {
git 'https://github.com/jenkinsci/kubernetes-plugin.git'
container('maven') {
stage('Build a Maven project') {
sh 'mvn -B -ntp clean install'
}
}
}
stage('Get a Golang project') {
git url: 'https://github.com/hashicorp/terraform.git', branch: 'main'
container('golang') {
stage('Build a Go project') {
sh '''
mkdir -p /go/src/github.com/hashicorp
ln -s `pwd` /go/src/github.com/hashicorp/terraform
cd /go/src/github.com/hashicorp/terraform && make
'''
}
}
}
}
}
or
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
containers:
- name: maven
image: maven:3.8.1-jdk-8
command:
- sleep
args:
- 99d
- name: golang
image: golang:1.16.5
command:
- sleep
args:
- 99d
''') {
node(POD_LABEL) {
stage('Get a Maven project') {
git 'https://github.com/jenkinsci/kubernetes-plugin.git'
container('maven') {
stage('Build a Maven project') {
sh 'mvn -B -ntp clean install'
}
}
}
stage('Get a Golang project') {
git url: 'https://github.com/hashicorp/terraform.git', branch: 'main'
container('golang') {
stage('Build a Go project') {
sh '''
mkdir -p /go/src/github.com/hashicorp
ln -s `pwd` /go/src/github.com/hashicorp/terraform
cd /go/src/github.com/hashicorp/terraform && make
'''
}
}
}
}
}
POD_CONTAINER variable

The variable POD_CONTAINER contains the name of the container in the current context. It is defined only within a container block.
podTemplate(containers: […]) {
node(POD_LABEL) {
stage('Run shell') {
container('mycontainer') {
sh "echo hello from $POD_CONTAINER" // displays 'hello from mycontainer'
}
}
}
}
Pod templates are used to create agents. They can be either configured via the user interface, or in a pipeline, using the podTemplate step. Either way it provides access to the following fields:

- cloud: the name of the cloud as defined in Jenkins settings. Defaults to kubernetes.
- label: the label of the pod, referenced by the node step. In a pipeline, it is recommended to omit this field and rely on the generated label that can be referred to using the POD_LABEL variable defined within the podTemplate block.
- yamlMergeStrategy: merge() or override(). Controls whether the yaml definition overrides or is merged with the yaml definition inherited from pod templates declared with inheritFrom. Defaults to override() (for backward compatibility reasons).
- nodeUsageMode: NORMAL or EXCLUSIVE. Controls whether Jenkins only schedules jobs with matching label expressions or uses the node as much as possible.
- podRetention: controls whether to keep the agent pod after the build; if empty, the pod is deleted after activeDeadlineSeconds has passed.
- activeDeadlineSeconds: if podRetention is set to never() or onFailure(), the pod is deleted after this deadline is passed.
- showRawYaml: enables or disables the output of the raw pod manifest. Defaults to true.
- workspaceVolume: the type of volume to use for the workspace.
  - emptyDirWorkspaceVolume (default): an empty dir allocated on the host machine
  - dynamicPVC(): a persistent volume claim managed dynamically. It is deleted at the same time as the pod.
  - hostPathWorkspaceVolume(): a host path volume
  - nfsWorkspaceVolume(): an NFS volume
  - persistentVolumeClaimWorkspaceVolume(): an existing persistent volume claim, by name

Container templates are part of the pod. They can be configured via the user interface or in a pipeline, and allow you to set the following fields:

- command: the command the container will execute. Defaults to sleep.
- args: the arguments passed to the command. Defaults to 99999999.

By default, the agent connection timeout is set to 1000 seconds. It can be customized using a system property. Please refer to the section below.
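As an illustrative sketch combining several of these fields (the values, and the requestsSize parameter of dynamicPVC, are assumptions to adapt to your cluster):

podTemplate(
    cloud: 'kubernetes',                 // the cloud name configured in Jenkins
    yamlMergeStrategy: merge(),          // merge with inherited yaml instead of overriding it
    podRetention: onFailure(),           // keep the pod around if the build fails
    activeDeadlineSeconds: 300,          // with onFailure(), the pod is still deleted after this deadline
    workspaceVolume: dynamicPVC(requestsSize: '10Gi'),
    containers: [
        containerTemplate(name: 'maven', image: 'maven:3.8.1-jdk-8', command: 'sleep', args: '99d')
    ]) {
    node(POD_LABEL) {
        container('maven') {
            sh 'mvn -version'
        }
    }
}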
In order to support any possible value in the Kubernetes Pod object, we can pass a yaml snippet that will be used as a base for the template. If any other properties are set outside the YAML, they will take precedence.
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
metadata:
labels:
some-label: some-label-value
spec:
containers:
- name: busybox
image: busybox
command:
- sleep
args:
- 99d
''') {
node(POD_LABEL) {
container('busybox') {
echo POD_CONTAINER // displays 'busybox'
sh 'hostname'
}
}
}
You can use readFile or readTrusted steps to load the yaml from a file. Also note that in declarative pipelines the yamlFile can be used (see this example).
pod.yaml
apiVersion: v1
kind: Pod
spec:
containers:
- name: maven
image: maven:3.8.1-jdk-8
command:
- sleep
args:
- 99d
- name: golang
image: golang:1.16.5
command:
- sleep
args:
- 99d
Jenkinsfile
podTemplate(yaml: readTrusted('pod.yaml')) {
node(POD_LABEL) {
// ...
}
}
containerTemplate(name: 'busybox', image: 'busybox', command: 'sleep', args: '99d',
livenessProbe: containerLivenessProbe(execArgs: 'some --command', initialDelaySeconds: 30, timeoutSeconds: 1, failureThreshold: 3, periodSeconds: 10, successThreshold: 1)
)
See Defining a liveness command for more details.
A pod template may or may not inherit from an existing template. This means that the pod template will inherit node selector, service account, image pull secrets, container templates and volumes from the template it inherits from.
Yaml is merged according to the value of yamlMergeStrategy.
Service account and node selector, when overridden, completely substitute any value found in the 'parent'.
A container template added to the podTemplate that has a matching containerTemplate (a container template with the same name) in the 'parent' template will inherit the configuration of the parent containerTemplate. If no matching container template is found, the template is added as is.

Volume inheritance works exactly like container templates.
Image Pull Secrets are combined (all secrets defined both on 'parent' and 'current' template are used).
In the example below, we will inherit from a pod template we created previously, and will just override the version of maven so that it uses jdk-11 instead:
podTemplate(inheritFrom: 'mypod', containers: [
containerTemplate(name: 'maven', image: 'maven:3.8.1-jdk-11')
]) {
node(POD_LABEL) {
…
}
}
Or in declarative pipeline
pipeline {
agent {
kubernetes {
inheritFrom 'mypod'
yaml '''
spec:
containers:
- name: maven
image: maven:3.8.1-jdk-11
'''
…
}
}
stages {
…
}
}
Note that we only need to specify the things that are different. So, command and arguments are not specified, as they are inherited. Also, the golang container will be added as defined in the 'parent' template.
Field inheritFrom may refer to a single podTemplate or to multiple ones separated by spaces. In the latter case, each template will be processed in the order it appears in the list (later items overriding earlier ones). In any case, if the referenced template is not found, it will be ignored.
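For example (the template names here are hypothetical, standing in for templates pre-configured in the UI):

podTemplate(inheritFrom: 'mypod mypod-extra') {
    node(POD_LABEL) {
        // 'mypod-extra' is processed last, so it overrides 'mypod' where they overlap
    }
}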
Field inheritFrom provides an easy way to compose podTemplates that have been pre-configured. In many cases it would be useful to define and compose podTemplates directly in the pipeline using groovy. This is made possible via nesting. You can nest multiple pod templates together in order to compose a single one.
The example below composes two different pod templates in order to create one with maven and docker capabilities.
podTemplate(containers: [containerTemplate(image: 'docker', name: 'docker', command: 'cat', ttyEnabled: true)]) {
podTemplate(containers: [containerTemplate(image: 'maven', name: 'maven', command: 'cat', ttyEnabled: true)]) {
node(POD_LABEL) { // gets a pod with both docker and maven
…
}
}
}
This feature is extra useful for pipeline library developers, as it allows you to wrap pod templates into functions and let users nest those functions according to their needs.
For example, one could create functions for their podTemplates and import them for use. Say here's our file src/com/foo/utils/PodTemplates.groovy:
package com.foo.utils
public void dockerTemplate(body) {
podTemplate(
containers: [containerTemplate(name: 'docker', image: 'docker', command: 'sleep', args: '99d')],
volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')]) {
body.call()
}
}
public void mavenTemplate(body) {
podTemplate(
containers: [containerTemplate(name: 'maven', image: 'maven', command: 'sleep', args: '99d')],
volumes: [secretVolume(secretName: 'maven-settings', mountPath: '/root/.m2'),
persistentVolumeClaim(claimName: 'maven-local-repo', mountPath: '/root/.m2repo')]) {
body.call()
}
}
return this
Then consumers of the library could just express the need for a maven pod with docker capabilities by combining the two. However, once again, you will need to express the specific container you wish to execute commands in. You can NOT omit the node statement.
Note that POD_LABEL will be the innermost generated label, so as to get a node which has all the outer pods available on the node, as shown in this example:
import com.foo.utils.PodTemplates
podTemplates = new PodTemplates()
podTemplates.dockerTemplate {
podTemplates.mavenTemplate {
node(POD_LABEL) {
container('docker') {
sh "echo hello from $POD_CONTAINER" // displays 'hello from docker'
}
container('maven') {
sh "echo hello from $POD_CONTAINER" // displays 'hello from maven'
}
}
}
}
In scripted pipelines, there are cases where this implicit inheritance via nested declaration is not wanted, or another explicit inheritance is preferred. In this case, use inheritFrom '' to remove any inheritance, or inheritFrom 'otherParent' to override it.
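A minimal sketch of opting out of the implicit nesting inheritance:

podTemplate(containers: [containerTemplate(name: 'docker', image: 'docker', command: 'sleep', args: '99d')]) {
    podTemplate(inheritFrom: '', containers: [containerTemplate(name: 'maven', image: 'maven', command: 'sleep', args: '99d')]) {
        node(POD_LABEL) {
            // this pod gets only the maven container (plus jnlp); docker is not inherited
        }
    }
}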
Declarative agents can be defined from yaml
pipeline {
agent {
kubernetes {
yaml '''
apiVersion: v1
kind: Pod
metadata:
labels:
some-label: some-label-value
spec:
containers:
- name: maven
image: maven:alpine
command:
- cat
tty: true
- name: busybox
image: busybox
command:
- cat
tty: true
'''
}
}
stages {
stage('Run maven') {
steps {
container('maven') {
sh 'mvn -version'
}
container('busybox') {
sh '/bin/busybox'
}
}
}
}
}
or using yamlFile to keep the pod template in a separate KubernetesPod.yaml file:
pipeline {
agent {
kubernetes {
yamlFile 'KubernetesPod.yaml'
}
}
stages {
…
}
}
Note that it was previously possible to define containerTemplate, but that has been deprecated in favor of the yaml format.
pipeline {
agent {
kubernetes {
//cloud 'kubernetes'
containerTemplate {
name 'maven'
image 'maven:3.8.1-jdk-8'
command 'sleep'
args '99d'
}
}
}
stages {
…
}
}
Run steps within a container by default. Steps will be nested within an implicit container(name) {...} block instead of being executed in the jnlp container.
pipeline {
agent {
kubernetes {
defaultContainer 'maven'
yamlFile 'KubernetesPod.yaml'
}
}
stages {
stage('Run maven') {
steps {
sh 'mvn -version'
}
}
}
}
Run the Pipeline or individual stage within a custom workspace - not required unless explicitly stated.
pipeline {
agent {
kubernetes {
customWorkspace 'some/other/path'
defaultContainer 'maven'
yamlFile 'KubernetesPod.yaml'
}
}
stages {
stage('Run maven') {
steps {
sh 'mvn -version'
sh "echo Workspace dir is ${pwd()}"
}
}
}
}
Unlike the scripted k8s template, declarative templates do not inherit from the parent template. Since the agents declared at stage level can override a global agent, implicit inheritance was leading to confusion.

You need to explicitly declare the inheritance if necessary, using the field inheritFrom.
In the following example, nested-pod will only contain the maven container.
pipeline {
agent {
kubernetes {
yaml '''
spec:
containers:
- name: golang
image: golang:1.16.5
command:
- sleep
args:
- 99d
'''
}
}
stages {
stage('Run maven') {
agent {
kubernetes {
yaml '''
spec:
containers:
- name: maven
image: maven:3.8.1-jdk-8
command:
- sleep
args:
- 99d
'''
}
}
steps {
…
}
}
}
}
If you use the containerTemplate to run some service in the background (e.g. a database for your integration tests), you might want to access its log from the pipeline. This can be done with the containerLog step, which prints the log of the requested container to the build log.
- name: the name of the container to get logs from, as defined in podTemplate. The parameter name can be omitted in simple usage: containerLog 'mongodb'
- returnLog: return the log instead of printing it to the build log (default: false)

Also see the online help and examples/containerLog.groovy.
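A usage sketch (the mongo image, its command, and the test script are illustrative placeholders):

podTemplate(containers: [
    containerTemplate(name: 'mongodb', image: 'mongo:4.4', command: 'mongod')
]) {
    node(POD_LABEL) {
        stage('Test') {
            sh './run-integration-tests.sh' // hypothetical test script using the database
        }
        stage('MongoDB log') {
            containerLog 'mongodb' // print the mongodb container log to the build log
        }
    }
}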
Please read the Features controlled by system properties page to know how to set up system properties within Jenkins.

- KUBERNETES_JENKINS_URL: Jenkins URL to be used by agents. This is meant to be used for OEM integration.
- io.jenkins.plugins.kubernetes.disableNoDelayProvisioning (since 1.19.1): whether to disable the no-delay provisioning strategy the plugin uses (defaults to false).
- jenkins.host.address: (for unit tests) controls the host agents should use to contact Jenkins.
- org.csanchez.jenkins.plugins.kubernetes.PodTemplate.connectionTimeout: the time in seconds to wait before considering that pod scheduling has failed (defaults to 1000).
- org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator.stdinBufferSize: stdin buffer size in bytes for commands sent to the Kubernetes exec API. A low value will cause slowness in executed commands; a higher value will consume more memory (defaults to 16*1024).
- org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator.websocketConnectionTimeout: time to wait for the websocket used by the container step to connect (defaults to 30).

OpenShift runs containers using a random UID that overrides what is specified in Docker images. For this reason, you may end up with the following warning in your build:
[WARNING] HOME is set to / in the jnlp container. You may encounter troubles when using tools or ssh client. This usually happens if the uid doesnt have any entry in /etc/passwd. Please add a user to your Dockerfile or set the HOME environment variable to a valid directory in the pod template definition.
At the moment the jenkinsci agent image is not built for OpenShift and will issue this warning.
This issue can be circumvented in various ways:
- set the HOME environment variable in the container to /home/jenkins and mount a volume to /home/jenkins, to ensure the user running the container can write to it

See this example configuration.
OpenShift 3 is based on an older version of Kubernetes, which is no longer directly supported since Kubernetes plugin version 1.26.0. To get agents working on OpenShift 3, add this Node Selector to your Pod Templates:
beta.kubernetes.io/os=linux
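In a pipeline, a similar effect can be sketched with the nodeSelector parameter of podTemplate:

podTemplate(nodeSelector: 'beta.kubernetes.io/os=linux') {
    node(POD_LABEL) {
        // agent pods are scheduled only on matching nodes
    }
}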
You can run pods on Windows if your cluster has Windows nodes. See the example.
Multiple containers can be defined in a pod. One of them is automatically created with the name jnlp, runs the Jenkins JNLP agent service with args ${computer.jnlpmac} ${computer.name}, and will be the container acting as Jenkins agent.
Other containers must run a long-running process, so the container does not exit. If the default entrypoint or command just runs something and exits, then it should be overridden with something like cat with ttyEnabled: true.
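For example (the image is illustrative), a container whose default entrypoint would exit immediately can be kept alive like this:

podTemplate(containers: [
    containerTemplate(name: 'node', image: 'node:16', command: 'cat', ttyEnabled: true)
]) {
    node(POD_LABEL) {
        container('node') {
            // cat blocks on the tty, keeping the container running for the build
            sh 'node --version'
        }
    }
}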
WARNING: If you want to provide your own Docker image for the inbound agent, you must name the container jnlp so it overrides the default one. Failing to do so will result in two agents trying to concurrently connect to the controller.
Create and start minikube
The client certificate needs to be converted to PKCS#12; you will need a password
openssl pkcs12 -export -out ~/.minikube/minikube.pfx -inkey ~/.minikube/apiserver.key -in ~/.minikube/apiserver.crt -certfile ~/.minikube/ca.crt -passout pass:secret
Validate that the certificates work
curl --cacert ~/.minikube/ca.crt --cert ~/.minikube/minikube.pfx:secret --cert-type P12 https://$(minikube ip):8443
Add a Jenkins credential of type certificate, upload it from ~/.minikube/minikube.pfx
, password secret
Fill Kubernetes server certificate key with the contents of ~/.minikube/ca.crt
Create a cluster
gcloud container clusters create jenkins --num-nodes 1 --machine-type g1-small
and note the admin password and server certificate.
Or use Google Developer Console to create a Container Engine cluster, then run
gcloud container clusters get-credentials jenkins
kubectl config view --raw
The last command will output the Kubernetes cluster configuration, including the API server URL, admin password, and root certificate.
First, watch whether the Jenkins agent pods are started. Make sure you are in the correct cluster and namespace.
kubectl get -a pods --watch
If they are in a different state than Running, use describe to get the events:
kubectl describe pods/my-jenkins-agent
If they are Running, use logs to get the log output:
kubectl logs -f pods/my-jenkins-agent jnlp
If pods are not started or for any other error, check the logs on the controller side.
For more detail, configure a new Jenkins log recorder for org.csanchez.jenkins.plugins.kubernetes at ALL level.
To inspect the JSON messages sent back and forth to the Kubernetes API server, you can configure a new Jenkins log recorder for okhttp3 at DEBUG level.
kubectl get pods -o name --selector=jenkins=slave --all-namespaces | xargs -I {} kubectl delete {}
sh step hangs when multiple containers are used

To debug this, you need to set the -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true system property and then restart the pipeline. Most likely you will see the following in the console log:
sh: can't create /home/jenkins/agent/workspace/thejob@tmp/durable-e0b7cd27/jenkins-log.txt: Permission denied
sh: can't create /home/jenkins/agent/workspace/thejob@tmp/durable-e0b7cd27/jenkins-result.txt.tmp: Permission denied
mv: can't rename '/home/jenkins/agent/workspace/thejob@tmp/durable-e0b7cd27/jenkins-result.txt.tmp': No such file or directory
touch: /home/jenkins/agent/workspace/thejob@tmp/durable-e0b7cd27/jenkins-log.txt: Permission denied
touch: /home/jenkins/agent/workspace/thejob@tmp/durable-e0b7cd27/jenkins-log.txt: Permission denied
touch: /home/jenkins/agent/workspace/thejob@tmp/durable-e0b7cd27/jenkins-log.txt: Permission denied
Usually this happens when the UID of the user in the jnlp container differs from the one in the other container(s). All containers you use should have the same user UID; this can also be achieved by setting securityContext:
apiVersion: v1
kind: Pod
spec:
securityContext:
runAsUser: 1000 # default UID of jenkins user in agent image
containers:
- name: maven
image: maven:3.8.1-jdk-8
command:
- cat
tty: true
Using WebSockets is the easiest and recommended way to establish the connection between agents and a Jenkins controller running outside the cluster. However, if your Jenkins controller has HTTPS configured with a self-signed certificate, you'll need to make sure the agent container trusts the CA. To do that, you can extend the jenkins/inbound-agent image and add your certificate as follows:
FROM jenkins/inbound-agent
USER root
ADD cert.pem /tmp/cert.pem
RUN keytool -noprompt -storepass changeit \
-keystore "$JAVA_HOME/jre/lib/security/cacerts" \
-import -file /tmp/cert.pem -alias jenkinsMaster && \
rm -f /tmp/cert.pem
USER jenkins
Then, use it as the jnlp container for the pod template as usual. No command or args need to be specified.
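A sketch of referencing such an image (the registry and image name are placeholders for wherever you push the image built above):

podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: registry.example.com/inbound-agent-with-ca
''') {
    node(POD_LABEL) {
        // the agent connects over WebSocket, trusting the baked-in CA
        sh 'hostname'
    }
}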
Note: when using the WebSocket mode, the -disableHttpsCertValidation option on the jenkins/inbound-agent becomes unavailable, as well as -cert, and that's why you have to extend the docker image.
Integration tests will use the currently configured context auto-detected from kube config file or service account.
Run mvn clean install and copy target/kubernetes.hpi to the Jenkins plugins folder.
Please note that the system you run mvn on needs to be reachable from the cluster. If you see the agents connect to the wrong host, you can use jenkins.host.address as mentioned above.
For integration tests, install and start minikube. Tests will detect it and run a set of integration tests in a new namespace.

Some integration tests run a local jenkins, so the host that runs them needs to be accessible from the kubernetes cluster. By default Jenkins will listen on the 192.168.64.1 interface only, for security reasons. If your minikube is not running in that network, pass connectorHost to maven, i.e.
mvn clean install -DconnectorHost=$(minikube ip | sed -e 's/\([0-9]*\.[0-9]*\.[0-9]*\).*/\1.1/')
If you don't mind others in your network being able to use your test jenkins you could just use this:
mvn clean install -DconnectorHost=0.0.0.0
Then your test jenkins will listen on all ip addresses so that the build pods will be able to connect from the pods in your minikube VM to your host.
If your minikube is running in a VM (e.g. on virtualbox) and the host running mvn
does not have a public hostname for the VM to access, you can set the jenkins.host.address
system property to the (host-only or NAT) IP of your host:
mvn clean install -Djenkins.host.address=192.168.99.1
If Microk8s is running and is the default context in your ~/.kube/config, just run as
mvn clean install -Pmicrok8s
This assumes that from a pod, the host system is accessible as IP address 10.1.1.1. It might be some variant such as 10.1.37.1, in which case you would need to set -DconnectorHost=… -Djenkins.host.address=… instead. To see the actual address, try:
ifdata -pa cni0
Or to verify the networking inside a pod:
kubectl run --rm --image=praqma/network-multitool --restart=Never --attach sh ip route | fgrep 'default via'
Try
bash test-in-k8s.sh
Docker image for Jenkins, with the plugin installed. Based on the official image.
docker run --rm --name jenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home csanchez/jenkins-kubernetes
The example configuration will create a stateful set running Jenkins with a persistent volume, using a service account to authenticate to the Kubernetes API.
A local testing cluster with one node can be created with minikube
minikube start
You may need to set the correct permissions for host mounted volumes
minikube ssh
sudo chown 1000:1000 /tmp/hostpath-provisioner/pvc-*
Then create the Jenkins namespace, controller and Service with
kubectl create namespace kubernetes-plugin
kubectl config set-context $(kubectl config current-context) --namespace=kubernetes-plugin
kubectl create -f src/main/kubernetes/service-account.yml
kubectl create -f src/main/kubernetes/jenkins.yml
Get the url to connect to with
minikube service jenkins --namespace kubernetes-plugin --url
Assuming you created a Kubernetes cluster named jenkins
this is how to run both Jenkins and agents there.
Creating all the elements and setting the default namespace
kubectl create namespace kubernetes-plugin
kubectl config set-context $(kubectl config current-context) --namespace=kubernetes-plugin
kubectl create -f src/main/kubernetes/service-account.yml
kubectl create -f src/main/kubernetes/jenkins.yml
Connect to the IP of the network load balancer created by Kubernetes, port 80. Get the IP (in this case 104.197.19.100) with kubectl describe services/jenkins (it may take a bit to populate).
$ kubectl describe services/jenkins
Name: jenkins
Namespace: default
Labels: <none>
Selector: name=jenkins
Type: LoadBalancer
IP: 10.175.244.232
LoadBalancer Ingress: 104.197.19.100
Port: http 80/TCP
NodePort: http 30080/TCP
Endpoints: 10.172.1.5:8080
Port: agent 50000/TCP
NodePort: agent 32081/TCP
Endpoints: 10.172.1.5:50000
Session Affinity: None
No events.
Until Kubernetes 1.4 removes the SNATing of source IPs, it seems that CSRF (enabled by default in Jenkins 2) needs to be configured to avoid WARNING: No valid crumb was included in request errors. This can be done by checking Enable proxy compatibility under Manage Jenkins -> Configure Global Security.
Configure Jenkins, adding the Kubernetes cloud under configuration, setting Kubernetes URL to the container engine cluster endpoint or simply https://kubernetes.default.svc.cluster.local. Under credentials, click Add and select Kubernetes Service Account, or alternatively use the Kubernetes API username and password. Select 'Certificate' as the credentials type if the kubernetes cluster is configured to use client certificates for authentication.
Using Kubernetes Service Account
will cause the plugin to use the default token mounted inside the Jenkins pod. See Configure Service Accounts for Pods for more information.
You may want to set Jenkins URL
to the internal service IP, http://10.175.244.232
in this case,to connect through the internal network.
Set Container Cap to a reasonable number for tests, e.g. 3.
Add an image with

- Docker image: jenkins/inbound-agent
- Jenkins agent root directory: /home/jenkins/agent

Now it is ready to be used.
Tearing it down
kubectl delete namespace/kubernetes-plugin
Modify file ./src/main/kubernetes/jenkins.yml
with desired limits
resources:
limits:
cpu: 1
memory: 1Gi
requests:
cpu: 0.5
memory: 500Mi
Note: the JVM will use the memory requests
as the heap limit (-Xmx)
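Agent containers can declare analogous requests and limits through containerTemplate parameters; a sketch (values are illustrative):

podTemplate(containers: [
    containerTemplate(name: 'maven', image: 'maven:3.8.1-jdk-8', command: 'sleep', args: '99d',
        resourceRequestCpu: '500m', resourceRequestMemory: '500Mi',
        resourceLimitCpu: '1', resourceLimitMemory: '1Gi')
]) {
    node(POD_LABEL) {
        // the maven container is scheduled with the requests/limits above
    }
}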
docker build -t csanchez/jenkins-kubernetes .