This plugin adds Jenkins pipeline steps to interact with the AWS API.
See the changelog for release information.
This plugin is not optimized for setups with a primary node and multiple agents. Only steps that touch the workspace are executed on the agents, while the rest is executed on the master.
For the best experience, make sure that the primary node and the agents have the same IAM permissions and networking capabilities.
By default, credentials lookup is done on the master node for all steps. To enable credentials lookup on the current node, enable Retrieve credentials from node
in Jenkins global configuration. This setting is globally applicable and restricts all access to the master's credentials.
The withAWS
step provides authorization for the nested steps. You can provide region and profile information or let Jenkins assume a role in another or the same AWS account. You can mix all parameters in one withAWS
block.
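For example, a minimal sketch combining a region with an assumed role in a single block (the role name and account id are illustrative):
withAWS(region:'eu-west-1', role:'deployment-role', roleAccount:'123456789012') {
// nested steps run with the assumed role in eu-west-1
s3Upload(file:'file.txt', bucket:'my-bucket', path:'file.txt')
}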
Set region information (note that region and endpointUrl are mutually exclusive):
withAWS(region:'eu-west-1') {
// do something
}
Use provided endpointUrl (endpointUrl is optional, however, region and endpointUrl are mutually exclusive):
withAWS(endpointUrl:'https://minio.mycompany.com',credentials:'nameOfSystemCredentials',federatedUserId:"${submitter}@${releaseVersion}") {
// do something
}
Use Jenkins UsernamePassword credentials information (Username: AccessKeyId, Password: SecretAccessKey):
withAWS(credentials:'IDofSystemCredentials') {
// do something
}
Use Jenkins AWS credentials information (AWS Access Key: AccessKeyId, AWS Secret Key: SecretAccessKey):
withAWS(credentials:'IDofAwsCredentials') {
// do something
}
Use profile information from ~/.aws/config:
withAWS(profile:'myProfile') {
// do something
}
Assume role information (account is optional - uses current account as default. externalId, roleSessionName and policy are optional. duration is optional - if specified it represents the maximum amount of time in seconds the session may persist for, defaults to 3600.):
withAWS(role:'admin', roleAccount:'123456789012', externalId: 'my-external-id', policy: '{"Version":"2012-10-17","Statement":[{"Sid":"Stmt1","Effect":"Deny","Action":"s3:DeleteObject","Resource":"*"}]}', duration: 3600, roleSessionName: 'my-custom-session-name') {
// do something
}
Assume federated user id information (federatedUserId is optional - if specified it generates a set of temporary credentials and allows you to push a federated user id into cloud trail for auditing. duration is optional - if specified it represents the maximum amount of time in seconds the session may persist for, defaults to 3600.):
withAWS(region:'eu-central-1',credentials:'nameOfSystemCredentials',federatedUserId:"${submitter}@${releaseVersion}", duration: 3600) {
// do something
}
Authentication with a SAML assertion (fetched from your company IdP) by assuming a role
withAWS(role: 'myRole', roleAccount: '123456789', principalArn: 'arn:aws:iam::123456789:saml-provider/test', samlAssertion: 'base64SAML', region:'eu-west-1') {
// do something
}
Authentication by retrieving credentials from the node in scope
node('myNode') { // Credentials will be fetched from this node
withAWS(role: 'myRole', roleAccount: '123456789', region:'eu-west-1', useNode: true) {
// do something
}
}
When you use Jenkins Declarative Pipelines, you can also use withAWS in an options block:
options {
withAWS(profile:'myProfile')
}
stages {
...
}
Print current AWS identity information to the log.
The step returns an object with the following fields: account (the AWS account number), user (the user id) and arn (the ARN of the current identity).
def identity = awsIdentity()
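The returned fields follow STS GetCallerIdentity and can be used directly in the pipeline, e.g.:
echo "running as ${identity.arn} in account ${identity.account}"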
Invalidate the given paths in a CloudFront distribution.
cfInvalidate(distribution:'someDistributionId', paths:['/*'])
cfInvalidate(distribution:'someDistributionId', paths:['/*'], waitForCompletion: true)
All s3* steps take an optional pathStyleAccessEnabled and payloadSigningEnabled boolean parameter.
s3Upload(pathStyleAccessEnabled: true, payloadSigningEnabled: true, file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt')
s3Copy(pathStyleAccessEnabled: true, fromBucket:'my-bucket', fromPath:'path/to/source/file.txt', toBucket:'other-bucket', toPath:'path/to/destination/file.txt')
s3Delete(pathStyleAccessEnabled: true, bucket:'my-bucket', path:'path/to/source/file.txt')
s3Download(pathStyleAccessEnabled: true, file:'file.txt', bucket:'my-bucket', path:'path/to/source/file.txt', force:true)
exists = s3DoesObjectExist(pathStyleAccessEnabled: true, bucket:'my-bucket', path:'path/to/source/file.txt')
files = s3FindFiles(pathStyleAccessEnabled: true, bucket:'my-bucket')
Upload a file/folder from the workspace (or a String) to an S3 bucket. If the file
parameter denotes a directory, the complete directory including all subfolders will be uploaded.
s3Upload(file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt')
s3Upload(file:'someFolder', bucket:'my-bucket', path:'path/to/targetFolder/')
Another way to use it is with include/exclude patterns which are applied in the specified subdirectory (workingDir). The option accepts a comma-separated list of patterns.
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*', workingDir:'dist', excludePathPattern:'**/*.svg,**/*.jpg')
Specific user metadata can be added to uploaded files.
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*.svg', workingDir:'dist', metadatas:['Key:SomeValue','Another:Value'])
Specific cache control can be added to uploaded files.
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*.svg', workingDir:'dist', cacheControl:'public,max-age=31536000')
Specific content encoding can be added to uploaded files
s3Upload(file:'file.txt', bucket:'my-bucket', contentEncoding: 'gzip')
Specific content type can be added to uploaded files
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*.ttf', workingDir:'dist', contentType:'application/x-font-ttf', contentDisposition:'attachment')
Canned ACLs can be added to upload requests.
s3Upload(file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt', acl:'PublicRead')
s3Upload(file:'someFolder', bucket:'my-bucket', path:'path/to/targetFolder/', acl:'BucketOwnerFullControl')
A Server Side Encryption Algorithm can be added to upload requests.
s3Upload(file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt', sseAlgorithm:'AES256')
A KMS alias or KMS id can be used to encrypt the uploaded file or directory at rest.
s3Upload(file: 'foo.txt', bucket: 'my-bucket', path: 'path/to/target/file.txt', kmsId: 'alias/foo')
s3Upload(file: 'foo.txt', bucket: 'my-bucket', path: 'path/to/target/file.txt', kmsId: '8e1d420d-bf94-4a15-a07a-8ad965abb30f')
s3Upload(file: 'bar-dir', bucket: 'my-bucket', path: 'path/to/target', kmsId: 'alias/bar')
A redirect location can be added to uploaded files.
s3Upload(file: 'file.txt', bucket: 'my-bucket', redirectLocation: '/redirect')
Create an S3 object whose contents is the provided text argument.
s3Upload(path: 'file.txt', bucket: 'my-bucket', text: 'Some Text Content')
s3Upload(path: 'path/to/targetFolder/file.txt', bucket: 'my-bucket', text: 'Some Text Content')
Tags can be added to uploaded files.
s3Upload(file: 'file.txt', bucket: 'my-bucket', tags: '[tag1:value1, tag2:value2]')
def tags=[:]
tags["tag1"]="value1"
tags["tag2"]="value2"
s3Upload(file: 'file.txt', bucket: 'my-bucket', tags: tags.toString())
Log messages can be made less verbose. Disable verbose logging when you feel the logs are excessive, but you will lose visibility of which files have been uploaded to S3.
s3Upload(path: 'source/path/', bucket: 'my-bucket', verbose: false)
Download a file/folder from S3 to the local workspace. Set the optional parameter force
to true
to overwrite an existing file in the workspace. If the path
ends with a /
, the complete virtual directory will be downloaded.
s3Download(file:'file.txt', bucket:'my-bucket', path:'path/to/source/file.txt', force:true)
s3Download(file:'targetFolder/', bucket:'my-bucket', path:'path/to/sourceFolder/', force:true)
Copy a file between S3 buckets.
s3Copy(fromBucket:'my-bucket', fromPath:'path/to/source/file.txt', toBucket:'other-bucket', toPath:'path/to/destination/file.txt')
Delete a file/folder from S3. If the path ends in a "/", then the path will be interpreted as a folder, and all of its contents will be removed.
s3Delete(bucket:'my-bucket', path:'path/to/source/file.txt')
s3Delete(bucket:'my-bucket', path:'path/to/sourceFolder/')
Check if an object exists in an S3 bucket.
exists = s3DoesObjectExist(bucket:'my-bucket', path:'path/to/source/file.txt')
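For example, to upload a file only if it is not already present (names are illustrative):
if (!s3DoesObjectExist(bucket:'my-bucket', path:'path/to/source/file.txt')) {
s3Upload(file:'file.txt', bucket:'my-bucket', path:'path/to/source/file.txt')
}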
This provides a way to query the files/folders in the S3 bucket, analogous to the findFiles
step provided by the pipeline-utility-steps-plugin. If specified, the path
limits the scope of the operation to that folder only. The glob
parameter tells s3FindFiles
what to look for. This can be a file name, a full path to a file, or a standard glob ("*", "*.ext", "path/**/file.ext", etc.).
If you do not specify path
, it defaults to the root of the bucket. The path is assumed to be a folder; you do not need to end it with a "/", but it is okay if you do. The path
property of the results will be relative to this value.
This works by enumerating every file/folder in the S3 bucket under path
and then performing glob matching. When possible, you should use path
to limit the search space for efficiency purposes.
If you do not specify glob
, it defaults to "*".
By default, this returns both files and folders. To only return files, set the onlyFiles
parameter to true.
files = s3FindFiles(bucket:'my-bucket')
files = s3FindFiles(bucket:'my-bucket', glob:'path/to/targetFolder/file.ext')
files = s3FindFiles(bucket:'my-bucket', path:'path/to/targetFolder/', glob:'file.ext')
files = s3FindFiles(bucket:'my-bucket', path:'path/to/targetFolder/', glob:'*.ext')
files = s3FindFiles(bucket:'my-bucket', path:'path/', glob:'**/file.ext')
s3FindFiles
returns an array of FileWrapper
objects exactly identical to those returned by findFiles.
Each FileWrapper
object has the following properties:
name: the filename portion of the path (for "path/to/my/file.ext", this would be "file.ext")
path: the full path of the file, relative to the path specified (for path="path/to/", this property of the file "path/to/my/file.ext" would be "my/file.ext")
directory: true if this is a directory; false otherwise
length: the length of the file (this is always "0" for directories)
lastModified: the last modification timestamp, in milliseconds since the Unix epoch (this is always "0" for directories)
When used in a string context, a FileWrapper
object returns the value of its path
property.
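For example, a short sketch listing matches (bucket, path and glob are illustrative):
files = s3FindFiles(bucket:'my-bucket', path:'releases/', glob:'*.zip')
for (file in files) {
echo "${file.name} (${file.length} bytes, modified ${file.lastModified})"
}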
Presign the bucket/key and return a URL. Defaults to a duration of 1 minute, using GET.
def url = s3PresignURL(bucket: 'mybucket', key: 'mykey')
The duration can be overridden:
def url = s3PresignURL(bucket: 'mybucket', key: 'mykey', durationInSeconds: 300) //5 minutes
The method can also be overridden:
def url = s3PresignURL(bucket: 'mybucket', key: 'mykey', httpMethod: 'POST')
Validates the given CloudFormation template.
def response = cfnValidate(file:'template.yaml')
echo "template description: ${response.description}"
Create or update the given CloudFormation stack using the given template from the workspace. You can specify an optional list of parameters, either as key/value pairs or a map. You can also specify a list of keepParams
of parameters which will use the previous value on stack updates.
Using timeoutInMinutes
you can specify the amount of time that can pass before the stack status becomes CREATE_FAILED and the stack gets rolled back. Due to limitations in the AWS API, this only applies to stack creation.
If you have many parameters you can specify a paramsFile
containing the parameters. The format is either a standard JSON file like with the CLI, or a YAML file for the cfn-params command line utility.
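For example, assuming a parameter file named cfn-params.json in the workspace (the file name is illustrative):
cfnUpdate(stack:'my-stack', file:'template.yaml', paramsFile:'cfn-params.json')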
Additionally you can specify a list of tags that are set on the stack and all resources created by CloudFormation.
The step returns the outputs of the stack as a map. It also contains special values prefixed with jenkins:
jenkinsStackUpdateStatus - "true"/"false", whether the stack was modified or not
When cfnUpdate creates a stack and the creation fails, the stack is deleted instead of being left in a broken state.
To prevent running into rate limiting on the AWS API you can change the default polling interval of 1000 ms using the parameter pollInterval. Using the value 0
disables event printing.
def outputs = cfnUpdate(stack:'my-stack', file:'template.yaml', params:['InstanceType=t2.nano'], keepParams:['Version'], timeoutInMinutes:10, tags:['TagName=Value'], notificationARNs:['arn:aws:sns:us-east-1:993852309656:topic'], pollInterval:1000)
or the parameters can be specified as a map:
def outputs = cfnUpdate(stack:'my-stack', file:'template.yaml', params:['InstanceType': 't2.nano'], keepParams:['Version'], timeoutInMinutes:10, tags:['TagName=Value'], notificationARNs:['arn:aws:sns:us-east-1:993852309656:topic'], pollInterval:1000)
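The returned map can be consumed like any Groovy map. The output keys are defined by your template ('VpcId' below is illustrative); jenkinsStackUpdateStatus is always present:
echo "created VPC: ${outputs['VpcId']}"
if (outputs['jenkinsStackUpdateStatus'] == 'true') {
echo 'the stack was modified'
}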
Alternatively, you can specify a URL to a template on S3 (you'll need this if you hit the 51200 byte limit on template):
def outputs = cfnUpdate(stack:'my-stack', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml')
By default, the cfnUpdate
step creates a new stack if the specified stack does not exist. This behaviour can be overridden by passing create: 'false'
as a parameter:
def outputs = cfnUpdate(stack:'my-stack', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', create: 'false')
In the above example, if my-stack
already exists it will be updated; if it doesn't exist, no action will be performed.
In a case where CloudFormation needs to use a different IAM Role for creating the stack than the one currently in effect, you can pass the complete Role ARN to be used as the roleArn
parameter, e.g.:
def outputs = cfnUpdate(stack:'my-stack', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', roleArn: 'arn:aws:iam::123456789012:role/S3Access')
It's possible to override the behaviour when stack creation fails by using "onFailure". Allowed values are DO_NOTHING, ROLLBACK, or DELETE. Because the normal default value of ROLLBACK behaves strangely in a CI/CD environment, cfnUpdate uses DELETE as the default.
def outputs = cfnUpdate(stack:'my-stack', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', onFailure:'DELETE')
You can specify rollback triggers for the stack update:
def outputs = cfnUpdate(stack:'my-stack', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', rollbackTimeoutInMinutes: 10, rollbackTriggers: ['AWS::CloudWatch::Alarm=arn:of:cloudwatch:alarm'])
When creating a stack, you can activate termination protection by using the enableTerminationProtection
field:
def outputs = cfnUpdate(stack:'my-stack', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', enableTerminationProtection: true)
Note: When creating a stack, either file
or url
is required. When updating it, omitting both parameters will keep the stack's current template.
Remove the given stack from CloudFormation.
To prevent running into rate limiting on the AWS API you can change the default polling interval of 1000 ms using the parameter pollInterval. Using the value 0
disables event printing.
Note: When deleting a stack, only the 'stack' parameter is required.
cfnDelete(stack:'my-stack', pollInterval:1000, retainResources :['mylogicalid'], roleArn: 'my-arn', clientRequestToken: 'my-request-token')
The step returns the outputs of the stack as a map.
def outputs = cfnDescribe(stack:'my-stack')
The step returns the global CloudFormation exports as a map.
def globalExports = cfnExports()
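Individual exports can then be looked up by name ('MyExportName' is illustrative):
echo "export value: ${globalExports['MyExportName']}"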
Create a change set to update the given CloudFormation stack using the given template from the workspace. You can specify an optional list of parameters, either as key/value pairs or a map. You can also specify a list of keepParams
of parameters which will use the previous value on stack updates.
If you have many parameters you can specify a paramsFile
containing the parameters. The format is either a standard JSON file like with the CLI, or a YAML file for the cfn-params command line utility.
Additionally you can specify a list of tags that are set on the stack and all resources created by CloudFormation.
The step returns the outputs of the stack as a map. It also contains special values prefixed with jenkins:
jenkinsStackUpdateStatus - "true"/"false", whether the stack was modified or not
To prevent running into rate limiting on the AWS API you can change the default polling interval of 1000 ms using the parameter pollInterval. Using the value 0
disables event printing.
cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', file:'template.yaml', params:['InstanceType=t2.nano'], keepParams:['Version'], tags:['TagName=Value'], notificationARNs:['arn:aws:sns:us-east-1:993852309656:topic'], pollInterval:1000)
or the parameters can be specified as a map:
cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', file:'template.yaml', params:['InstanceType': 't2.nano'], keepParams:['Version'], tags:['TagName=Value'], notificationARNs:['arn:aws:sns:us-east-1:993852309656:topic'], pollInterval:1000)
Alternatively, you can specify a URL to a template on S3 (you'll need this if you hit the 51200 byte limit on template):
cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml')
or specify a raw template:
cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', template: 'my template body')
By default, the cfnCreateChangeSet
step creates a change set for creating a new stack if the specified stack does not exist. This behaviour can be overridden by passing create: 'false'
as a parameter:
cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', create: 'false')
In the above example, if my-stack
already exists, a change set for updating the stack will be created; if it doesn't exist, no action will be performed.
In a case where CloudFormation needs to use a different IAM Role for creating or updating the stack than the one currently in effect, you can pass the complete Role ARN to be used as the roleArn
parameter, e.g.:
cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', roleArn: 'arn:aws:iam::123456789012:role/S3Access')
You can specify rollback triggers for the stack update:
cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', rollbackTimeoutInMinutes: 10, rollbackTriggers: ['AWS::CloudWatch::Alarm=arn:of:cloudwatch:alarm'])
Note: When creating a change set for a non-existing stack, either file
or url
is required. When updating it, omitting both parameters will keep the stack's current template.
Execute a previously created change set to create or update a CloudFormation stack. All the necessary information, like parameters and tags, was provided earlier when the change set was created.
To prevent running into rate limiting on the AWS API you can change the default polling interval of 1000 ms using the parameter pollInterval. Using the value 0
disables event printing.
def outputs = cfnExecuteChangeSet(stack:'my-stack', changeSet:'my-change-set', pollInterval:1000)
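A typical review flow creates the change set, waits for manual approval, and then executes it (a sketch; the input message is illustrative):
cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', file:'template.yaml')
input 'Apply the change set?'
def outputs = cfnExecuteChangeSet(stack:'my-stack', changeSet:'my-change-set')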
Create a stack set. Similar options to cfnUpdate. Will monitor the resulting StackSet operation and will fail the build step if the operation does not complete successfully.
To prevent running into rate limiting on the AWS API you can change the default polling interval of 1000 ms using the parameter pollInterval. Using the value 0
disables event printing.
cfnUpdateStackSet(stackSet:'myStackSet', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml')
To set a custom administrator role ARN:
cfnUpdateStackSet(stackSet:'myStackSet', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', administratorRoleArn: 'mycustomarn')
To set operation preferences:
cfnUpdateStackSet(stackSet:'myStackSet', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', operationPreferences: [failureToleranceCount: 5])
When the stack set gets really big, the recommendation from AWS is to batch the update requests. This option is not part of the AWS API, but is an implementation to facilitate updating a large stack set. To automatically batch via region (find all stack instances, group them by region, and submit each region separately):
cfnUpdateStackSet(stackSet:'myStackSet', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', batchingOptions: [regions: true])
Deletes a stack set.
To prevent running into rate limiting on the AWS API you can change the default polling interval of 1000 ms using the parameter pollInterval. Using the value 0
disables event printing.
cfnDeleteStackSet(stackSet:'myStackSet')
Publishes a message to SNS. Note that the optional parameter messageAttributes
assumes string-only values.
snsPublish(topicArn:'arn:aws:sns:us-east-1:123456789012:MyNewTopic', subject:'my subject', message:'this is your message', messageAttributes: ['k1': 'v1', 'k2': 'v2'])
Deploys an API Gateway definition to a stage.
deployAPI(api:'myApiId', stage:'Prod')
Additionally you can specify a description and stage variables.
deployAPI(api:'myApiId', stage:'Prod', description:"Build: ${env.BUILD_ID}", variables:['key=value'])
Deploys an application revision through the specified deployment group (AWS CodeDeploy)
From S3 bucket:
createDeployment(
s3Bucket: 'jenkins.bucket',
s3Key: 'artifacts/SimpleWebApp.zip',
s3BundleType: 'zip', // [Valid values: tar | tgz | zip | YAML | JSON]
applicationName: 'SampleWebApp',
deploymentGroupName: 'SampleDeploymentGroup',
deploymentConfigName: 'CodeDeployDefault.AllAtOnce',
description: 'Test deploy',
waitForCompletion: 'true',
//Optional values
ignoreApplicationStopFailures: 'false',
fileExistsBehavior: 'OVERWRITE'// [Valid values: DISALLOW, OVERWRITE, RETAIN]
)
From GitHub:
createDeployment(
gitHubRepository: 'MykhayloGnylorybov/AwsCodeDeployArtifact',
gitHubCommitId: 'e9ee742f44c9a0f97ee3aa94593e7b6aad6e2d14',
applicationName: 'SampleWebApp',
deploymentGroupName: 'SampleDeploymentGroup',
deploymentConfigName: 'CodeDeployDefault.AllAtOnce',
description: 'Test deploy',
waitForCompletion: 'true'
)
Waits for a CodeDeploy deployment to complete.
The step runs within the withAWS
block and requires only one parameter, the deployment id:
Simple await:
awaitDeploymentCompletion('d-3GR0HQLDN')
Timed await:
timeout(time: 15, unit: 'MINUTES'){
awaitDeploymentCompletion('d-3GR0HQLDN')
}
Retrieves the list of all AWS accounts of the organization. This step can only be run in the master account.
The step returns an array of Account objects.
def accounts = listAWSAccounts()
You can specify a parent id (root or organizational unit) with the optional parameter parent:
def accounts = listAWSAccounts('ou-1234-12345678')
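The result can be iterated like any Groovy collection:
def accounts = listAWSAccounts()
accounts.each { account ->
echo "found account: ${account}"
}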
Create or update a SAML identity provider with the given metadata document.
The step returns the ARN of the created identity provider.
def idp = updateIdP(name: 'nameToCreateOrUpdate', metadata: 'pathToMetadataFile')
Update the assume role trust policy of the given role using the provided file.
updateTrustPolicy(roleName: 'SomeRole', policyFile: 'path/to/somefile.json')
Create or update the AWS account alias.
setAccountAlias(name: 'awsAlias')
Delete images in a repository.
ecrDeleteImages(repositoryName: 'foo', imageIds: ['imageDigest': 'digest', 'imageTag': 'tag'])
List images in a repository.
def images = ecrListImages(repositoryName: 'foo')
Create the login string to authenticate Docker with ECR.
The step returns the shell command to perform the login.
def login = ecrLogin()
For older versions of docker that need the email parameter use:
def login = ecrLogin(email:true)
It's also possible to specify AWS accounts to perform ECR login into:
def login = ecrLogin(registryIds: ['123456789', '987654321'])
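The returned string can then be executed with the sh step to perform the actual Docker login:
def login = ecrLogin()
sh login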
Sets the JSON policy document containing ECR permissions.
The step returns the object returned by the command.
def result = ecrSetRepositoryPolicy(registryId: 'my-registryId',
repositoryName: 'my-repositoryName',
policyText: 'json-policyText'
)
def policyFile ="${env.WORKSPACE}/policyText.json"
def policyText = readFile file: policyFile
def result = ecrSetRepositoryPolicy(registryId: 'my-registryId',
repositoryName: 'my-repositoryName',
policyText: policyText
)
Invoke a Lambda function.
The step returns the object returned by the Lambda.
def result = invokeLambda(
functionName: 'myLambdaFunction',
payload: [ "key": "value", "anotherkey" : [ "another", "value"] ]
)
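The result is the deserialized response of the Lambda and can be used like any Groovy object:
echo "lambda returned: ${result}"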
Alternatively payload and return value can be Strings instead of Objects:
String result = invokeLambda(
functionName: 'myLambdaFunction',
payloadAsString: '{"key": "value"}',
returnValueAsString: true
)
Cleans up Lambda function versions older than the daysAgo flag. The main use case around this is for tooling like the AWS Serverless Application Model. It creates Lambda functions, but marks them as DeletionPolicy: Retain
so the versions are never deleted. Over time, these unused versions accumulate and the account/region might hit the limit for maximum storage of Lambda functions.
lambdaVersionCleanup(
functionName: 'myLambdaFunction',
daysAgo: 14
)
To discover and delete all old versions of functions created by an AWS CloudFormation stack:
lambdaVersionCleanup(
stackName: 'myStack',
daysAgo: 14
)
Share an AMI image with one or more accounts.
ec2ShareAmi(
amiId: 'ami-23842',
accountIds: [ "0123456789", "1234567890" ]
)
Registers a target to a Target Group.
elbRegisterInstance(
targetGroupARN: 'arn:aws:elasticloadbalancing:us-west-2:123456789:targetgroup/my-load-balancer/123456789',
instanceID: 'i-myid',
port: 8080
)
Deregisters a target from a Target Group.
elbDeregisterInstance(
targetGroupARN: 'arn:aws:elasticloadbalancing:us-west-2:123456789:targetgroup/my-load-balancer/123456789',
instanceID: 'i-myid',
port: 8080
)
Check if a target is registered and healthy.
The step returns true or false.
elbIsInstanceRegistered(
targetGroupARN: 'arn:aws:elasticloadbalancing:us-west-2:123456789:targetgroup/my-load-balancer/123456789',
instanceID: 'i-myid',
port: 8080
)
Check if a target has been completely removed from the Target Group.
The step returns true or false.
elbIsInstanceDeregistered(
targetGroupARN: 'arn:aws:elasticloadbalancing:us-west-2:123456789:targetgroup/my-load-balancer/123456789',
instanceID: 'i-myid',
port: 8080
)
Creates a new Elastic Beanstalk application.
ebCreateApplication(
applicationName: "my-application",
description: "My first application"
)
Creates a new deployable version for an existing Elastic Beanstalk application. The version is created from files uploaded to an S3 bucket. The resulting version label can be used to deploy a new environment.
ebCreateApplicationVersion(
applicationName: "my-application",
versionLabel: "my-application-1.0.0",
s3Bucket: "my-bucket",
s3Key: "my-application.jar",
description: "My first application version"
)
Creates a new configuration template for an existing Elastic Beanstalk application. The template can be based on an existing environment, a solution stack, or another configuration template, as the examples below show.
// Create configuration template based on existing environment
ebCreateConfigurationTemplate(
applicationName: "my-application",
templateName: "my-application-production-template",
environmentId: "my-application-production",
description: "Configuration template for the production environment of my application"
)
// Create configuration template based on a solution stack
ebCreateConfigurationTemplate(
applicationName: "my-application",
templateName: "my-application-production-template",
solutionStackName: "64bit Amazon Linux 2018.03 v3.3.9 running Tomcat 8.5 Java 8",
description: "Configuration template for the production environment of my application"
)
// Create configuration template based on an existing configuration template
ebCreateConfigurationTemplate(
applicationName: "my-application",
templateName: "my-application-production-template",
sourceConfigurationApplication: "my-other-application",
sourceConfigurationTemplate: "my-other-application-production-template",
description: "Configuration template for the production environment of my application"
)
Creates a new environment for an existing Elastic Beanstalk application. The environment can be created based on existing configuration templates and application versions for that application.
// Create environment from existing configuration template
ebCreateEnvironment(
applicationName: "my-application",
environmentName: "production",
templateName: "my-application-production-template",
versionLabel: "my-application-1.0.0",
description: "Production environment of my application"
)
// Create environment with no configuration template, using a Supported Platform string
ebCreateEnvironment(
applicationName: "my-application",
environmentName: "production",
solutionStackName: "64bit Amazon Linux 2018.03 v3.3.9 running Tomcat 8.5 Java 8",
versionLabel: "my-application-1.0.0",
description: "Production environment of my application"
)
Swaps the CNAMEs of the environments. This is useful for Blue-Green deployments.
// Swap CNAMEs using Ids
ebSwapEnvironmentCNAMEs(
sourceEnvironmentId: "e-65abcdefgh",
destinationEnvironmentId: "e-66zxcvbdg"
)
// Swap CNAMEs using the environment names
ebSwapEnvironmentCNAMEs(
sourceEnvironmentName: "production",
destinationEnvironmentName: "production-2"
)
// Swap CNAMEs using the source environment name and destination environment CNAME
ebSwapEnvironmentCNAMEs(
sourceEnvironmentName: "green",
destinationEnvironmentCNAME: "production.eu-west-1.elasticbeanstalk.com"
)
Waits for an environment to be in the specified status.
This can be used to ensure that the environment is ready to accept commands, like an update or a termination command. Be aware this does not guarantee that the application has finished starting up. If an application has a long startup time, the environment will be ready for new commands before the application has finished booting.
The optional status argument accepts Launching | Updating | Ready | Terminating | Terminated and defaults to Ready.
// Wait for environment to be ready for new commands
ebWaitOnEnvironmentStatus(
applicationName: "my-application",
environmentName: "production"
)
// Wait for environment to be terminated
ebWaitOnEnvironmentStatus(
applicationName: "my-application",
environmentName: "temporary",
status: "Terminated"
)
Waits for an environment to reach the desired health status, and remain there for a minimum amount of time.
This can be used to ensure that the environment has finished the startup process, and that the web application is ready and available.
The optional health argument accepts Green | Yellow | Red | Grey and defaults to Green.
// Wait for environment health to be green for at least 1 minute
ebWaitOnEnvironmentHealth(
applicationName: "my-application",
environmentName: "production"
)
// Detect immediately if environment becomes red
ebWaitOnEnvironmentHealth(
applicationName: "my-application",
environmentName: "temporary",
health: "Red",
stabilityThreshold: 0
)
Changelog
* Added a raw version of the ebSwapEnvironmentCNAMEs command that looks up the required id and name params
* Added Elastic Beanstalk steps (ebCreateApplication, ebCreateApplicationVersion, ebCreateConfigurationTemplate, ebCreateEnvironment, ebSwapEnvironmentCNAMEs, ebWaitOnEnvironmentStatus, ebWaitOnEnvironmentHealth)
* Added createDeployment step
* Fixed cfnExecuteChangeSet when there is no resource change (#210)
* Added registryIds argument to ecrLogin
* Added notificationARNs argument to cfnUpdate and cfnUpdateStackSet
* Added Stopped status for CodeDeployment deployments
* Added s3DoesObjectExist step
* Added parent argument to listAWSAccounts
* Added text option to s3Upload
* Added jenkinsStackUpdateStatus to stack outputs; specifies if the stack was modified
* Fixed: region was mandatory on withAWS
* Fixed cfnExecuteChangeSet (#132)
* Added administratorRoleArn to cfnUpdateStackSet
* Added enableTerminationProtection to cfnUpdate
* Added s3Copy step
* Use SynchronousNonBlockingStepExecution for some steps for better error handling
* Fixed cfnExecuteChangeSet step (#67)
* Added ec2ShareAmi step
* Added kmsId parameter to s3Upload
* Fixed: s3Upload did not work in Jenkins 2.102+ (JENKINS-49025)
* Fixed withAWS step for assume role (JENKINS-45807)
* Fixed setAccountAlias, broken during code cleanup
* Fixed withAWS step for assume role
* Added onFailure option when creating a stack to allow changed behaviour
* Added setAccountAlias step
* Added cfnCreateChangeSet step
* Added cfnExecuteChangeSet step
* The return value of invokeLambda is now serializable
* Added awsIdentity step
* Added ecrLogin step
* Added invokeLambda step
* Added cacheControl to the s3Upload step
* Added options to s3Upload: workingDir, includePathPattern, excludePathPattern, metadatas and acl
* Added safeName to the listAWSAccounts step
* Added s3FindFiles step
* Added updateIdP step
* Added awaitDeploymentCompletion step
* Added s3Delete step
* Added listAWSAccounts step