I want to build a CodePipeline that fetches (Java) code from GitHub, builds a JAR file, and deploys it to AWS Lambda (or stores the JAR in a specific S3 bucket). I only want to use tools provided by the AWS platform.
If I use CodeBuild alone, I am able to build the JAR from the GitHub code and store it in S3 (https://docs.aws.amazon.com/codebuild/latest/userguide/getting-started.html). I am using a deployer Lambda function to deploy the code to my service Lambda; the deployer Lambda is triggered whenever anything changes in the S3 bucket.
Drawback: the problem is that after every commit of changes to GitHub I have to run CodeBuild manually. I want CodeBuild to detect changes in GitHub automatically.
To solve the above problem I made a CodePipeline, which detects code changes via GitHub webhooks, but there it creates a ZIP file rather than a JAR.
So what I actually want is:
GitHub (changes) ---
buildspec.yml
version: 0.2
phases:
  build:
    commands:
      - echo Build started on `date`
      - mvn test
  post_build:
    commands:
      - echo Build completed on `date`
      - mvn package
artifacts:
  files:
    - target/testfunction-1.0.0-jar-with-dependencies.jar
CodePipeline artifact locations are different for each pipeline execution, so executions are isolated from one another.
I think what you want to do is generate a JAR file in CodeBuild, which will then end up inside a ZIP in the CodePipeline artifact. You can add a second CodeBuild action that takes the output of the first CodeBuild action as its input (the CodeBuild action will unzip the input artifact for you) and deploys it to S3 (which is straightforward to script with the AWS CLI).
It is entirely possible to combine those two CodeBuild actions into one, but I like to keep the "build" and "deploy" steps separate.
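As a sketch of what that second "deploy" CodeBuild action's buildspec could look like: it just copies the unzipped JAR to a bucket with the AWS CLI. The bucket name is a placeholder, and the JAR path assumes the artifact layout produced by the build buildspec above; substitute your own values.

```yaml
version: 0.2
phases:
  build:
    commands:
      # Input artifact is already unzipped by CodeBuild; copy the JAR to the target bucket.
      # YOUR-DEPLOY-BUCKET is a placeholder for your own bucket name.
      - aws s3 cp target/testfunction-1.0.0-jar-with-dependencies.jar s3://YOUR-DEPLOY-BUCKET/testfunction.jar
```

With the deployer Lambda triggered on that bucket, the S3 upload completes the chain automatically.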
First off, CodeDeploy is confusing when all you want is a simple pipeline that updates a Lambda whenever a GitHub commit happens. It shouldn't be this hard. We created the following Lambda function, which processes the CodePipeline job's build artifact (a ZIP) and pushes the JAR update to Lambda using updateFunctionCode.
import com.amazonaws.services.codepipeline.AWSCodePipeline;
import com.amazonaws.services.codepipeline.AWSCodePipelineClientBuilder;
import com.amazonaws.services.codepipeline.model.FailureDetails;
import com.amazonaws.services.codepipeline.model.PutJobFailureResultRequest;
import com.amazonaws.services.codepipeline.model.PutJobSuccessResultRequest;
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.UpdateFunctionCodeRequest;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import org.json.JSONObject;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

/**
 * Created by jonathan and josh on 1/22/2019.
 * <p>
 * Process Code Pipeline Job
 */
@SuppressWarnings("unused")
public class CodePipelineLambdaUpdater {
    private static AWSCodePipeline codepipeline = null;
    private static AmazonS3 s3 = null;
    private static AWSLambda lambda = null;

    @SuppressWarnings("UnusedParameters")
    public void handler(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
        // Read the job JSON object
        String json = new String(readStreamToByteArray(inputStream), "UTF-8");
        JSONObject eventJsonObject = new JSONObject(json);

        // Extract the jobId first
        JSONObject codePipelineJobJsonObject = eventJsonObject.getJSONObject("CodePipeline.job");
        String jobId = codePipelineJobJsonObject.getString("id");

        // Initialize the clients if necessary
        if (codepipeline == null) {
            codepipeline = AWSCodePipelineClientBuilder.defaultClient();
        }
        if (s3 == null) {
            s3 = AmazonS3ClientBuilder.defaultClient();
        }
        if (lambda == null) {
            lambda = AWSLambdaClientBuilder.defaultClient();
        }

        try {
            // The bucketName and objectKey refer to the intermediate ZIP file produced by CodePipeline
            JSONObject s3Location = codePipelineJobJsonObject.getJSONObject("data")
                    .getJSONArray("inputArtifacts").getJSONObject(0)
                    .getJSONObject("location").getJSONObject("s3Location");
            String bucketName = s3Location.getString("bucketName");
            String objectKey = s3Location.getString("objectKey");

            // The user parameter is the Lambda function name that we want to update.
            // This is configured when adding the CodePipeline action.
            String functionName = codePipelineJobJsonObject.getJSONObject("data")
                    .getJSONObject("actionConfiguration").getJSONObject("configuration")
                    .getString("UserParameters");

            System.out.println("bucketName: " + bucketName);
            System.out.println("objectKey: " + objectKey);
            System.out.println("functionName: " + functionName);

            // Download the object
            S3Object s3Object = s3.getObject(new GetObjectRequest(bucketName, objectKey));

            // Read the JAR out of the ZIP file. It should be the only file for our Java code.
            ZipInputStream zis = new ZipInputStream(s3Object.getObjectContent());
            ZipEntry zipEntry;
            byte[] data = null;
            while ((zipEntry = zis.getNextEntry()) != null) {
                if (zipEntry.getName().endsWith(".jar")) {
                    System.out.println("zip file: " + zipEntry.getName());
                    data = readStreamToByteArray(zis);
                    System.out.println("Length: " + data.length);
                    break;
                }
            }

            // If we have data then update the function
            if (data != null) {
                // Update the Lambda function code
                UpdateFunctionCodeRequest updateFunctionCodeRequest = new UpdateFunctionCodeRequest();
                updateFunctionCodeRequest.setFunctionName(functionName);
                updateFunctionCodeRequest.setPublish(true);
                updateFunctionCodeRequest.setZipFile(ByteBuffer.wrap(data));
                lambda.updateFunctionCode(updateFunctionCodeRequest);
                System.out.println("Updated function: " + functionName);

                // Report success back to CodePipeline
                PutJobSuccessResultRequest putJobSuccessResultRequest = new PutJobSuccessResultRequest();
                putJobSuccessResultRequest.setJobId(jobId);
                codepipeline.putJobSuccessResult(putJobSuccessResultRequest);
            } else {
                // Fail the job: no JAR was found in the artifact
                PutJobFailureResultRequest putJobFailureResultRequest = new PutJobFailureResultRequest();
                putJobFailureResultRequest.setJobId(jobId);
                FailureDetails failureDetails = new FailureDetails();
                failureDetails.setMessage("No data available to update function with.");
                putJobFailureResultRequest.setFailureDetails(failureDetails);
                codepipeline.putJobFailureResult(putJobFailureResultRequest);
            }
            System.out.println("Finished");
        } catch (Throwable e) {
            // Handle all other exceptions by failing the job
            System.out.println("Well that ended badly...");
            e.printStackTrace();
            PutJobFailureResultRequest putJobFailureResultRequest = new PutJobFailureResultRequest();
            putJobFailureResultRequest.setJobId(jobId);
            FailureDetails failureDetails = new FailureDetails();
            failureDetails.setMessage("Failed with error: " + e.getMessage());
            putJobFailureResultRequest.setFailureDetails(failureDetails);
            codepipeline.putJobFailureResult(putJobFailureResultRequest);
        }
    }

    private static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[100000];
        for (; ; ) {
            int rc = in.read(buffer);
            if (rc == -1) break;
            out.write(buffer, 0, rc);
        }
        out.flush();
    }

    private static byte[] readStreamToByteArray(InputStream in) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try {
            copy(in, baos);
        } finally {
            safeClose(in);
        }
        return baos.toByteArray();
    }

    private static InputStream safeClose(InputStream in) {
        try {
            if (in != null) in.close();
        } catch (Throwable ignored) {
        }
        return null;
    }
}
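The trickiest part of the handler above is pulling the JAR out of the intermediate ZIP artifact. That step can be exercised locally, without any AWS calls, with a small self-contained sketch (the class and entry names here are illustrative, not part of the original project):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipJarExtractor {
    // Returns the bytes of the first ".jar" entry found in the ZIP stream, or null if none exists.
    public static byte[] extractFirstJar(InputStream zipStream) throws IOException {
        try (ZipInputStream zis = new ZipInputStream(zipStream)) {
            ZipEntry entry;
            while ((entry = zis.getNextEntry()) != null) {
                if (entry.getName().endsWith(".jar")) {
                    ByteArrayOutputStream baos = new ByteArrayOutputStream();
                    byte[] buffer = new byte[8192];
                    int n;
                    while ((n = zis.read(buffer)) != -1) {
                        baos.write(buffer, 0, n);
                    }
                    return baos.toByteArray();
                }
            }
        }
        return null;
    }

    public static void main(String[] args) throws IOException {
        // Build an in-memory ZIP containing a fake JAR entry, then extract it back out.
        byte[] payload = {1, 2, 3, 4};
        ByteArrayOutputStream zipBytes = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(zipBytes)) {
            zos.putNextEntry(new ZipEntry("target/testfunction.jar"));
            zos.write(payload);
            zos.closeEntry();
        }
        byte[] extracted = extractFirstJar(new ByteArrayInputStream(zipBytes.toByteArray()));
        System.out.println("Extracted " + extracted.length + " bytes");
    }
}
```

In the real handler the input stream would be `s3Object.getObjectContent()` instead of an in-memory ZIP, but the extraction logic is the same.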
Here is the project's Maven file.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.yourcompany</groupId>
    <artifactId>codepipeline-lambda-updater</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>com.amazonaws</groupId>
                <artifactId>aws-java-sdk-bom</artifactId>
                <version>1.11.487</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-core</artifactId>
            <version>1.1.0</version>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-lambda</artifactId>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-core</artifactId>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-s3 -->
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-s3</artifactId>
            <version>1.11.487</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-codepipeline -->
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-codepipeline</artifactId>
            <version>1.11.487</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-api</artifactId>
            <version>2.10.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>2.10.0</version>
        </dependency>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-log4j2</artifactId>
            <version>1.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.jetbrains</groupId>
            <artifactId>annotations</artifactId>
            <version>15.0</version>
        </dependency>
        <!--<dependency>-->
        <!--<groupId>com.google.code.gson</groupId>-->
        <!--<artifactId>gson</artifactId>-->
        <!--<version>2.8.2</version>-->
        <!--</dependency>-->
        <!-- https://mvnrepository.com/artifact/org.json/json -->
        <dependency>
            <groupId>org.json</groupId>
            <artifactId>json</artifactId>
            <version>20180813</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.1</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <transformers>
                                <transformer
                                        implementation="com.github.edwgiz.mavenShadePlugin.log4j2CacheTransformer.PluginsCacheFileTransformer">
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
                <dependencies>
                    <dependency>
                        <groupId>com.github.edwgiz</groupId>
                        <artifactId>maven-shade-plugin.log4j2-cachefile-transformer</artifactId>
                        <version>2.8.1</version>
                    </dependency>
                </dependencies>
            </plugin>
        </plugins>
    </build>
</project>
This baseline should get you started. Embellish the code with further SDK calls for more sophisticated deployments as you see fit.
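One practical detail worth adding: the handler calls S3, Lambda, and CodePipeline APIs, so its execution role needs matching permissions. A minimal policy sketch follows; the wildcard resources are a placeholder simplification, and in practice you would scope them to your artifact bucket and target function ARNs.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:GetObject", "Resource": "*" },
    { "Effect": "Allow", "Action": "lambda:UpdateFunctionCode", "Resource": "*" },
    { "Effect": "Allow",
      "Action": ["codepipeline:PutJobSuccessResult", "codepipeline:PutJobFailureResult"],
      "Resource": "*" }
  ]
}
```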