
terraform-aws-lambda


AWS Lambda Terraform module

Terraform module which creates almost all supported AWS Lambda resources and takes care of building and packaging the required Lambda dependencies for functions and layers.

This Terraform module is part of the serverless.tf framework, which aims to simplify all operations when working with serverless in Terraform:

  1. Build and install dependencies - read more. Requires Python 3.6 or newer.
  2. Create, store, and use deployment packages - read more.
  3. Create, update, and publish AWS Lambda Function and Lambda Layer - see usage.
  4. Create static and dynamic aliases for AWS Lambda Function - see usage, see modules/alias.
  5. Do complex deployments (eg, rolling, canary, rollbacks, triggers) - read more, see modules/deploy.

Features

  • Build dependencies for your Lambda Function and Layer.
  • Support builds locally and in Docker (with or without SSH agent support for private builds).
  • Create deployment package or deploy existing (previously built package) from local, from S3, from URL, or from AWS ECR repository.
  • Store deployment packages locally or in the S3 bucket.
  • Support almost all features of Lambda resources (function, layer, alias, etc.)
  • Lambda@Edge
  • Conditional creation for many types of resources.
  • Control execution of nearly any step in the process - build, package, store package, deploy, update.
  • Control nearly all aspects of Lambda resources (provisioned concurrency, VPC, EFS, dead-letter notification, tracing, async events, event source mapping, IAM role, IAM policies, and more).
  • Support integration with other serverless.tf modules like HTTP API Gateway (see examples there).

Usage

Lambda Function (store package locally)

module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda1"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  source_path = "../src/lambda-function1"

  tags = {
    Name = "my-lambda1"
  }
}

Lambda Function and Lambda Layer (store packages on S3)

module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "lambda-with-layer"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"
  publish       = true

  source_path = "../src/lambda-function1"

  store_on_s3 = true
  s3_bucket   = "my-bucket-id-with-lambda-builds"

  layers = [
    module.lambda_layer_s3.lambda_layer_arn,
  ]

  environment_variables = {
    Serverless = "Terraform"
  }

  tags = {
    Module = "lambda-with-layer"
  }
}

module "lambda_layer_s3" {
  source = "terraform-aws-modules/lambda/aws"

  create_layer = true

  layer_name          = "lambda-layer-s3"
  description         = "My amazing lambda layer (deployed from S3)"
  compatible_runtimes = ["python3.8"]

  source_path = "../src/lambda-layer"

  store_on_s3 = true
  s3_bucket   = "my-bucket-id-with-lambda-builds"
}

Lambda Functions with existing package (prebuilt) stored locally

module "lambda_function_existing_package_local" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-existing-package-local"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  create_package         = false
  local_existing_package = "../existing_package.zip"
}

Lambda Function or Lambda Layer with the deployable artifact maintained separately from the infrastructure

You may want to manage the function code and the infrastructure resources (such as IAM permissions, policies, events, etc) in separate flows (e.g., different repositories, teams, CI/CD pipelines).

In that case, disable source code tracking to turn off deployments (and rollbacks) through this module by setting ignore_source_code_hash = true, and deploy a dummy function.

Once the infrastructure and the dummy function are deployed, you can use an external tool (eg, the AWS CLI) to update the source code of the function, while continuing to use this module via Terraform to manage the infrastructure.

Be aware that changes to the local_existing_package value may still trigger a deployment via Terraform.

module "lambda_function_externally_managed_package" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-externally-managed-package"
  description   = "My lambda function code is deployed separately"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  create_package         = false
  local_existing_package = "./lambda_functions/code.zip"

  ignore_source_code_hash = true
}

Lambda Function with existing package (prebuilt) stored in S3 bucket

Note that this module does not copy prebuilt packages into the S3 bucket; it can only store the packages it builds, either locally or in the S3 bucket.

locals {
  my_function_source = "../path/to/package.zip"
}

resource "aws_s3_bucket" "builds" {
  bucket = "my-builds"
  acl    = "private"
}

resource "aws_s3_bucket_object" "my_function" {
  bucket = aws_s3_bucket.builds.id
  key    = "${filemd5(local.my_function_source)}.zip"
  source = local.my_function_source
}

module "lambda_function_existing_package_s3" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-existing-package-local"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  create_package      = false
  s3_existing_package = {
    bucket = aws_s3_bucket.builds.id
    key    = aws_s3_bucket_object.my_function.id
  }
}

Lambda Functions from Container Image stored on AWS ECR

module "lambda_function_container_image" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-existing-package-local"
  description   = "My awesome lambda function"

  create_package = false

  image_uri    = "132367819851.dkr.ecr.eu-west-1.amazonaws.com/complete-cow:1.0"
  package_type = "Image"
}

Lambda Layers (store packages locally and on S3)

module "lambda_layer_local" {
  source = "terraform-aws-modules/lambda/aws"

  create_layer = true

  layer_name          = "my-layer-local"
  description         = "My amazing lambda layer (deployed from local)"
  compatible_runtimes = ["python3.8"]

  source_path = "../fixtures/python3.8-app1"
}

module "lambda_layer_s3" {
  source = "terraform-aws-modules/lambda/aws"

  create_layer = true

  layer_name          = "my-layer-s3"
  description         = "My amazing lambda layer (deployed from S3)"
  compatible_runtimes = ["python3.8"]

  source_path = "../fixtures/python3.8-app1"

  store_on_s3 = true
  s3_bucket   = "my-bucket-id-with-lambda-builds"
}

Lambda@Edge

Make sure you deploy Lambda@Edge functions in the US East (N. Virginia) region (us-east-1). See Requirements and Restrictions on Lambda Functions.

module "lambda_at_edge" {
  source = "terraform-aws-modules/lambda/aws"

  lambda_at_edge = true

  function_name = "my-lambda-at-edge"
  description   = "My awesome lambda@edge function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  source_path = "../fixtures/python3.8-app1"

  tags = {
    Module = "lambda-at-edge"
  }
}

Lambda Function in VPC

module "lambda_function_in_vpc" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-in-vpc"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  source_path = "../fixtures/python3.8-app1"

  vpc_subnet_ids         = module.vpc.intra_subnets
  vpc_security_group_ids = [module.vpc.default_security_group_id]
  attach_network_policy  = true
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.10.0.0/16"

  # Specify at least one of: intra_subnets, private_subnets, or public_subnets
  azs           = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  intra_subnets = ["10.10.101.0/24", "10.10.102.0/24", "10.10.103.0/24"]
}

Additional IAM policies for Lambda Functions

There are six supported ways to attach IAM policies to the IAM role used by the Lambda Function (a combined sketch follows the list):

  1. policy_json - JSON string or heredoc, when attach_policy_json = true.
  2. policy_jsons - List of JSON strings or heredoc, when attach_policy_jsons = true and number_of_policy_jsons > 0.
  3. policy - ARN of existing IAM policy, when attach_policy = true.
  4. policies - List of ARNs of existing IAM policies, when attach_policies = true and number_of_policies > 0.
  5. policy_statements - Map of maps to define IAM statements which will be generated as IAM policy. Requires attach_policy_statements = true. See examples/complete for more information.
  6. assume_role_policy_statements - Map of maps to define IAM statements which will be generated as IAM policy for assuming Lambda Function role (trust relationship). See examples/complete for more information.
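
A minimal combined sketch, assuming the policy ARN, statement names, and resource ARNs below are placeholders and the policy_statements shape follows examples/complete:

module "lambda_function_with_policies" {
  source = "terraform-aws-modules/lambda/aws"

  # ... function arguments omitted for brevity

  # Way 3: attach an existing managed policy by ARN (placeholder ARN)
  attach_policy = true
  policy        = "arn:aws:iam::aws:policy/AmazonSQSReadOnlyAccess"

  # Way 5: generate an inline policy from statements
  attach_policy_statements = true
  policy_statements = {
    dynamodb_read = {
      effect    = "Allow",
      actions   = ["dynamodb:GetItem", "dynamodb:Query"],
      resources = ["arn:aws:dynamodb:eu-west-1:123456789012:table/my-table"]
    }
  }
}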

Lambda Permissions for allowed triggers

Lambda Permissions should be specified to allow certain resources to invoke the Lambda Function.

module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  # ...omitted for brevity

  allowed_triggers = {
    APIGatewayAny = {
      service    = "apigateway"
      source_arn = "arn:aws:execute-api:eu-west-1:135367859851:aqnku8akd0/*/*/*"
    },
    APIGatewayDevPost = {
      service    = "apigateway"
      source_arn = "arn:aws:execute-api:eu-west-1:135367859851:aqnku8akd0/dev/POST/*"
    },
    OneRule = {
      principal  = "events.amazonaws.com"
      source_arn = "arn:aws:events:eu-west-1:135367859851:rule/RunDaily"
    }
  }
}

Conditional creation

Sometimes you need to create resources conditionally, but Terraform does not allow the use of count inside a module block, so the solution is to specify create arguments.

module "lambda" {
  source = "terraform-aws-modules/lambda/aws"

  create = false # to disable all resources

  create_package  = false  # to control build package process
  create_function = false  # to control creation of the Lambda Function and related resources
  create_layer    = false  # to control creation of the Lambda Layer and related resources
  create_role     = false  # to control creation of the IAM role and policies required for Lambda Function

  attach_cloudwatch_logs_policy = false
  attach_dead_letter_policy     = false
  attach_network_policy         = false
  attach_tracing_policy         = false
  attach_async_event_policy     = false

  # ... omitted
}

How does building and packaging work?

This is one of the most complicated parts handled by the module, and normally you don't have to know its internals.

package.py is the Python script that does this. Make sure Python 3.6 or newer is installed. The main functions of the script are to generate a filename for the zip archive based on the content of the files, verify whether the zip archive has already been created, and create the zip archive only when necessary (during apply, not plan).

The hash of a zip archive created from the same file content is always identical, which prevents unnecessary force-updates of the Lambda resources unless the content changes. If you need different filenames for the same content, you can specify the extra string argument hash_extra.

When this module is called multiple times in one execution to create packages with the same source_path, the zip archives will be corrupted due to concurrent writes into the same file. There are two solutions: set different values for hash_extra to create different archives, or create the package once outside (using this module) and then pass the local_existing_package argument to create the other Lambda resources.
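
For example, a minimal sketch of two functions built from the same source_path and kept apart with hash_extra (names and paths are illustrative):

module "lambda_one" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-one"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  source_path = "../src/shared-source"
  hash_extra  = "lambda-one"   # makes the generated archive filename unique
}

module "lambda_two" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-two"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  source_path = "../src/shared-source"
  hash_extra  = "lambda-two"   # different value => different archive, no concurrent writes
}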

Debug

Building and packaging have historically been hard to debug (especially with Terraform), so we have made an effort to make it easier to see debug information. There are four debug levels:

  • DEBUG - see only what is happening during the planning phase and how the zip file content is filtered when patterns are applied.
  • DEBUG2 - see more logging output.
  • DEBUG3 - see all logging values.
  • DUMP_ENV - see all logging values and environment variables (be careful sharing your environment variables as they may contain secrets!).

You can specify the debug level like this:

export TF_LAMBDA_PACKAGE_LOG_LEVEL=DEBUG2
terraform apply

You can enable comments in the heredoc strings used in patterns, which can be helpful in some situations. To do this, set this environment variable:

export TF_LAMBDA_PACKAGE_PATTERN_COMMENTS=true
terraform apply

Build Dependencies

You can specify source_path in a variety of ways to achieve the desired flexibility when building deployment packages locally or in Docker. You can use absolute or relative paths. If you have placed your Terraform files in subdirectories, note that relative paths are resolved from the directory where terraform plan is run, not from the location of your Terraform file.

Note that, when building locally, files are not copied anywhere from the source directories when making packages; fast Python regular expressions are used to find matching files and directories, which makes packaging very fast and easy to understand.

Simple build from single directory

When source_path is set to a string, the content of that path will be used to create the deployment package as-is:

source_path = "src/function1"

Static build from multiple source directories

When source_path is set to a list of directories, the content of each will be taken and a single archive will be created.
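
For example (the directory names are illustrative):

source_path = [
  "src/function1",
  "src/function1-dependencies",
]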

Combine various options for extreme flexibility

This is the most complete way of creating a deployment package from multiple sources with multiple dependencies. This example shows some of the available options (see examples/build-package for more):

source_path = [
  "src/main-source",
  "src/another-source/index.py",
  {
    path     = "src/function1-dep",
    patterns = [
      "!.*/.*\\.txt", # Skip all txt files recursively
    ]
  }, {
    path             = "src/python3.8-app1",
    pip_requirements = true,
    prefix_in_zip    = "foo/bar1",
  }, {
    path             = "src/python3.8-app2",
    pip_requirements = "requirements-large.txt",
    patterns = [
      "!vendor/colorful-0.5.4.dist-info/RECORD",
      "!vendor/colorful-.+.dist-info/.*",
      "!vendor/colorful/__pycache__/?.*",
    ]
  }, {
    path     = "src/python3.8-app3",
    commands = [
      "npm install",
      ":zip"
    ],
    patterns = [
      "!.*/.*\\.txt",    # Skip all txt files recursively
      "node_modules/.+", # Include all node_modules
    ],
  }, {
    path     = "src/python3.8-app3",
    commands = ["go build"],
    patterns = <<END
      bin/.*
      abc/def/.*
    END
  }
]

A few notes:

  • All arguments except path are optional.
  • patterns - List of Python regexes that filenames should satisfy. The default value is "include everything", which is equal to patterns = [".*"]. This can also be specified as a multiline heredoc string (no comments allowed). Some examples of valid patterns:
    !.*/.*\.txt        # Filter all txt files recursively
    node_modules/.*    # Include empty dir or with a content if it exists
    node_modules/.+    # Include full non empty node_modules dir with its content
    node_modules/      # Include node_modules itself without its content
                       # It's also a way to include an empty dir if it exists
    node_modules       # Include a file or an existing dir only

    !abc/.*            # Filter out everything in an abc folder
    abc/def/.*         # Re-include everything in abc/def sub folder
    !abc/def/hgk/.*    # Filter out again in abc/def/hgk sub folder
  • commands - List of commands to run. If specified, this argument overrides pip_requirements.
    • :zip [source] [destination] is a special command which zips the content of the current working directory (or of the directory given as the first argument) and places it inside the archive under the given path (the second argument).
  • pip_requirements - Controls whether to execute pip install. Set to false to disable this feature, true to run pip install with the requirements.txt found in path, or set it to another filename which you want to use instead.
  • prefix_in_zip - If specified, will be used as a prefix inside the zip archive. By default, everything is installed into the root of the zip archive.

Building in Docker

If your Lambda Function or Layer uses some dependencies, you can build them in Docker and have them included in the deployment package. Here is how you can do it:

build_in_docker   = true
docker_file       = "src/python3.8-app1/docker/Dockerfile"
docker_build_root = "src/python3.8-app1/docker"
docker_image      = "lambci/lambda:build-python3.8"
runtime           = "python3.8"    # Setting runtime is required when building a package in Docker and when creating a Lambda Layer.

Using this module you can install dependencies from private hosts. To do this, you need to forward the SSH agent:

docker_with_ssh_agent = true

Deployment package - Create or use existing

By default, this module creates a deployment package and uses it to create or update the Lambda Function or Lambda Layer.

Sometimes you may want to split building the deployment package (eg, compiling and installing dependencies) and deploying the package into two separate steps.

When creating the archive locally outside of this module, you need to set create_package = false and then pass the argument local_existing_package = "existing_package.zip". Alternatively, you may prefer to keep your deployment packages in an S3 bucket and provide a reference to them like this:

create_package      = false
s3_existing_package = {
  bucket = "my-bucket-with-lambda-builds"
  key    = "existing_package.zip"
}

Using deployment package from remote URL

This can be implemented in two steps: download the file locally using curl, and pass the path to the deployment package via the local_existing_package argument.

locals {
  package_url = "https://raw.githubusercontent.com/terraform-aws-modules/terraform-aws-lambda/master/examples/fixtures/python3.8-zip/existing_package.zip"
  downloaded  = "downloaded_package_${md5(local.package_url)}.zip"
}

resource "null_resource" "download_package" {
  triggers = {
    downloaded = local.downloaded
  }

  provisioner "local-exec" {
    command = "curl -L -o ${local.downloaded} ${local.package_url}"
  }
}

data "null_data_source" "downloaded_package" {
  inputs = {
    id       = null_resource.download_package.id
    filename = local.downloaded
  }
}

module "lambda_function_existing_package_from_remote_url" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-existing-package-local"
  description   = "My awesome lambda function"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"

  create_package         = false
  local_existing_package = data.null_data_source.downloaded_package.outputs["filename"]
}

How to deploy and manage Lambda Functions?

Simple deployments

Typically, the Lambda Function resource is updated when the source code changes. If publish = true is specified, a new Lambda Function version will also be created.

A published Lambda Function can be invoked either by version number or via $LATEST. This is the simplest way of deployment and does not require any additional tool or service.
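
A minimal sketch with publishing enabled (the function name and source path are placeholders); the published version number is then available via the lambda_function_version output:

module "lambda_function_published" {
  source = "terraform-aws-modules/lambda/aws"

  function_name = "my-lambda-published"
  handler       = "index.lambda_handler"
  runtime       = "python3.8"
  publish       = true   # create a new Lambda Function version on each change

  source_path = "../src/lambda-function1"
}

output "published_version" {
  value = module.lambda_function_published.lambda_function_version
}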

Controlled deployments (rolling, canary, rollbacks)

In order to do controlled deployments (rolling, canary, rollbacks) of Lambda Functions, we need to use Lambda Function aliases.

In simple terms, a Lambda alias is a pointer either to a single version of a Lambda Function (when a deployment is complete) or to two weighted versions of a Lambda Function (during a rolling or canary deployment).

One Lambda Function can be referenced by multiple aliases. Using aliases gives fine-grained control over which version is deployed to which environment.

There is an alias module, which simplifies working with aliases (creation, configuration management, updates, etc). See examples/alias for various use-cases of how aliases can be configured and used.

There is a deploy module, which creates the resources required to do deployments using AWS CodeDeploy. It also creates the deployment and waits for completion. See examples/deploy for a complete end-to-end build/update/deploy process.
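
As a rough sketch of wiring an alias to a published function - note that the submodule argument names shown here (name, function_name, function_version) are assumptions; check modules/alias and examples/alias for the actual interface:

module "alias_current" {
  source = "terraform-aws-modules/lambda/aws//modules/alias"

  # Assumed arguments - verify against modules/alias
  name             = "current"
  function_name    = module.lambda_function.lambda_function_name
  function_version = module.lambda_function.lambda_function_version
}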

Terraform CI/CD

Terraform Cloud, Terraform Enterprise, and many other SaaS offerings for running Terraform do not have Python pre-installed on their workers. You will need to provide an alternative Docker image with Python installed in order to use this module there.

FAQ

Q1: Why is the deployment package not recreated every time I change something? Or, why is the deployment package recreated every time even though the content has not changed?

Answer: There can be several reasons, related to concurrent executions or to the content hash. Sometimes a change happened inside a dependency which is not used when calculating the content hash, or multiple packages are being created at the same time from the same sources. You can force a difference by setting distinct values of hash_extra.

Q2: How can I force the deployment package to be recreated?

Answer: Delete the existing zip archive from the builds directory, or make a change in your source code. If there is no zip archive for the current content hash, it will be recreated during terraform apply.

Q3: null_resource.archive[0] must be replaced

Answer: This probably means that the zip archive has been deployed but is currently absent locally, so it has to be recreated locally. If you run into this issue during a CI/CD process (where the workspace is clean) or across multiple workspaces, you can set the environment variable TF_RECREATE_MISSING_LAMBDA_PACKAGE=false or pass recreate_missing_package = false as a parameter to the module, then run terraform apply.
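
For example, a minimal sketch of the module-argument variant (all other arguments omitted; the environment-variable route is shown in the comment):

# Alternatively: export TF_RECREATE_MISSING_LAMBDA_PACKAGE=false && terraform apply
module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  # ... other arguments omitted

  # Do not recreate the deployment package when the local zip archive is missing
  recreate_missing_package = false
}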

Q4: What does this error mean - "We currently do not support adding policies for $LATEST." ?

Answer: When the Lambda Function is created with publish = true, the version is automatically incremented and a qualified identifier (version number) becomes available, which is then used when setting Lambda permissions.

When publish = false (the default), only the unqualified identifier ($LATEST) is available, which leads to this error.

The solution is either to disable the creation of Lambda permissions for the current version by setting create_current_version_allowed_triggers = false, or to enable publishing of the Lambda Function (publish = true).
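
As a sketch of the two options (only the relevant arguments are shown; pick one of them):

module "lambda_function" {
  source = "terraform-aws-modules/lambda/aws"

  # ... other arguments omitted

  # Option 1: keep publish = false and skip permissions for the current version
  create_current_version_allowed_triggers = false

  # Option 2 (instead of option 1): publish versions so a qualified identifier exists
  # publish = true
}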

Notes

  1. Creation of Lambda Functions and Lambda Layers is very similar, and both support the same features (building from a source path, using an existing package, storing the package locally or on S3).
  2. Check out this Awesome list of AWS Lambda Layers

Examples

  • Complete - Create Lambda resources in various combinations with all supported features.
  • Container Image - Create Docker image (using docker provider), push it to AWS ECR, and create Lambda function from it.
  • Build and Package - Build and create deployment packages in various ways.
  • Alias - Create static and dynamic aliases in various ways.
  • Deploy - Complete end-to-end build/update/deploy process using AWS CodeDeploy.
  • Async Invocations - Create Lambda Function with async event configuration (with SQS, SNS, and EventBridge integration).
  • With VPC - Create Lambda Function with VPC.
  • With EFS - Create Lambda Function with Elastic File System attached (Terraform 0.13+ is recommended).
  • Multiple regions - Create the same Lambda Function in multiple regions with non-conflicting IAM roles and policies.
  • Event Source Mapping - Create Lambda Function with event source mapping configuration (SQS, DynamoDB, Amazon MQ, and Kinesis).
  • Triggers - Create Lambda Function with some triggers (eg, Cloudwatch Events, EventBridge).

Examples by the users of this module

Requirements

Name Version
terraform >= 0.12.31
aws >= 3.61
external >= 1
local >= 1
null >= 2

Providers

Name Version
aws >= 3.61
external >= 1
local >= 1
null >= 2

Modules

No modules.

Resources

Name Type
aws_cloudwatch_log_group.lambda resource
aws_iam_policy.additional_inline resource
aws_iam_policy.additional_json resource
aws_iam_policy.additional_jsons resource
aws_iam_policy.async resource
aws_iam_policy.dead_letter resource
aws_iam_policy.logs resource
aws_iam_policy.tracing resource
aws_iam_policy.vpc resource
aws_iam_role.lambda resource
aws_iam_role_policy_attachment.additional_inline resource
aws_iam_role_policy_attachment.additional_json resource
aws_iam_role_policy_attachment.additional_jsons resource
aws_iam_role_policy_attachment.additional_many resource
aws_iam_role_policy_attachment.additional_one resource
aws_iam_role_policy_attachment.async resource
aws_iam_role_policy_attachment.dead_letter resource
aws_iam_role_policy_attachment.logs resource
aws_iam_role_policy_attachment.tracing resource
aws_iam_role_policy_attachment.vpc resource
aws_lambda_event_source_mapping.this resource
aws_lambda_function.this resource
aws_lambda_function_event_invoke_config.this resource
aws_lambda_layer_version.this resource
aws_lambda_permission.current_version_triggers resource
aws_lambda_permission.unqualified_alias_triggers resource
aws_lambda_provisioned_concurrency_config.current_version resource
aws_s3_bucket_object.lambda_package resource
local_file.archive_plan resource
null_resource.archive resource
aws_arn.log_group_arn data source
aws_cloudwatch_log_group.lambda data source
aws_iam_policy.tracing data source
aws_iam_policy.vpc data source
aws_iam_policy_document.additional_inline data source
aws_iam_policy_document.assume_role data source
aws_iam_policy_document.async data source
aws_iam_policy_document.dead_letter data source
aws_iam_policy_document.logs data source
aws_partition.current data source
external_external.archive_prepare data source

Inputs

Name Description Type Default Required
allowed_triggers Map of allowed triggers to create Lambda permissions map(any) {} no
architectures Instruction set architecture for your Lambda function. Valid values are ["x86_64"] and ["arm64"]. list(string) null no
artifacts_dir Directory name where artifacts should be stored string "builds" no
assume_role_policy_statements Map of dynamic policy statements for assuming Lambda Function role (trust relationship) any {} no
attach_async_event_policy Controls whether async event policy should be added to IAM role for Lambda Function bool false no
attach_cloudwatch_logs_policy Controls whether CloudWatch Logs policy should be added to IAM role for Lambda Function bool true no
attach_dead_letter_policy Controls whether SNS/SQS dead letter notification policy should be added to IAM role for Lambda Function bool false no
attach_network_policy Controls whether VPC/network policy should be added to IAM role for Lambda Function bool false no
attach_policies Controls whether list of policies should be added to IAM role for Lambda Function bool false no
attach_policy Controls whether policy should be added to IAM role for Lambda Function bool false no
attach_policy_json Controls whether policy_json should be added to IAM role for Lambda Function bool false no
attach_policy_jsons Controls whether policy_jsons should be added to IAM role for Lambda Function bool false no
attach_policy_statements Controls whether policy_statements should be added to IAM role for Lambda Function bool false no
attach_tracing_policy Controls whether X-Ray tracing policy should be added to IAM role for Lambda Function bool false no
build_in_docker Whether to build dependencies in Docker bool false no
cloudwatch_logs_kms_key_id The ARN of the KMS Key to use when encrypting log data. string null no
cloudwatch_logs_retention_in_days Specifies the number of days you want to retain log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. number null no
cloudwatch_logs_tags A map of tags to assign to the resource. map(string) {} no
compatible_architectures A list of Architectures Lambda layer is compatible with. Currently x86_64 and arm64 can be specified. list(string) null no
compatible_runtimes A list of Runtimes this layer is compatible with. Up to 5 runtimes can be specified. list(string) [] no
create Controls whether resources should be created bool true no
create_async_event_config Controls whether async event configuration for Lambda Function/Alias should be created bool false no
create_current_version_allowed_triggers Whether to allow triggers on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources) bool true no
create_current_version_async_event_config Whether to allow async event configuration on current version of Lambda Function (this will revoke permissions from previous version because Terraform manages only current resources) bool true no
create_function Controls whether Lambda Function resource should be created bool true no
create_layer Controls whether Lambda Layer resource should be created bool false no
create_package Controls whether Lambda package should be created bool true no
create_role Controls whether IAM role for Lambda Function should be created bool true no
create_unqualified_alias_allowed_triggers Whether to allow triggers on unqualified alias pointing to $LATEST version bool true no
create_unqualified_alias_async_event_config Whether to allow async event configuration on unqualified alias pointing to $LATEST version bool true no
dead_letter_target_arn The ARN of an SNS topic or SQS queue to notify when an invocation fails. string null no
description Description of your Lambda Function (or Layer) string "" no
destination_on_failure Amazon Resource Name (ARN) of the destination resource for failed asynchronous invocations string null no
destination_on_success Amazon Resource Name (ARN) of the destination resource for successful asynchronous invocations string null no
docker_build_root Root dir where to build in Docker string "" no
docker_file Path to a Dockerfile when building in Docker string "" no
docker_image Docker image to use for the build string "" no
docker_pip_cache Whether to mount a shared pip cache folder into docker environment or not any null no
docker_with_ssh_agent Whether to pass SSH_AUTH_SOCK into docker environment or not bool false no
environment_variables A map that defines environment variables for the Lambda Function. map(string) {} no
event_source_mapping Map of event source mapping any {} no
file_system_arn The Amazon Resource Name (ARN) of the Amazon EFS Access Point that provides access to the file system. string null no
file_system_local_mount_path The path where the function can access the file system, starting with /mnt/. string null no
function_name A unique name for your Lambda Function string "" no
handler Lambda Function entrypoint in your code string "" no
hash_extra The string to add into hashing function. Useful when building same source path for different functions. string "" no
ignore_source_code_hash Whether to ignore changes to the function's source code hash. Set to true if you manage infrastructure and code deployments separately. bool false no
image_config_command The CMD for the docker image list(string) [] no
image_config_entry_point The ENTRYPOINT for the docker image list(string) [] no
image_config_working_directory The working directory for the docker image string null no
image_uri The ECR image URI containing the function's deployment package. string null no
kms_key_arn The ARN of KMS key to use by your Lambda Function string null no
lambda_at_edge Set this to true if using Lambda@Edge, to enable publishing, limit the timeout, and allow edgelambda.amazonaws.com to invoke the function bool false no
lambda_role IAM role ARN attached to the Lambda Function. This governs both who / what can invoke your Lambda Function, as well as what resources our Lambda Function has access to. See Lambda Permission Model for more details. string "" no
layer_name Name of Lambda Layer to create string "" no
layers List of Lambda Layer Version ARNs (maximum of 5) to attach to your Lambda Function. list(string) null no
license_info License info for your Lambda Layer. Eg, MIT or full url of a license. string "" no
local_existing_package The absolute path to an existing zip-file to use string null no
maximum_event_age_in_seconds Maximum age of a request that Lambda sends to a function for processing in seconds. Valid values between 60 and 21600. number null no
maximum_retry_attempts Maximum number of times to retry when the function returns an error. Valid values between 0 and 2. Defaults to 2. number null no
memory_size Amount of memory in MB your Lambda Function can use at runtime. Valid value between 128 MB to 10,240 MB (10 GB), in 64 MB increments. number 128 no
number_of_policies Number of policies to attach to IAM role for Lambda Function number 0 no
number_of_policy_jsons Number of policies JSON to attach to IAM role for Lambda Function number 0 no
package_type The Lambda deployment package type. Valid options: Zip or Image string "Zip" no
policies List of policy statements ARN to attach to Lambda Function role list(string) [] no
policy An additional policy document ARN to attach to the Lambda Function role string null no
policy_json An additional policy document as JSON to attach to the Lambda Function role string null no
policy_jsons List of additional policy documents as JSON to attach to Lambda Function role list(string) [] no
policy_path Path of policies to that should be added to IAM role for Lambda Function string null no
policy_statements Map of dynamic policy statements to attach to Lambda Function role any {} no
provisioned_concurrent_executions Amount of capacity to allocate. Set to 1 or greater to enable, or set to 0 to disable provisioned concurrency. number -1 no
publish Whether to publish creation/change as new Lambda Function Version. bool false no
recreate_missing_package Whether to recreate missing Lambda package if it is missing locally or not bool true no
reserved_concurrent_executions The amount of reserved concurrent executions for this Lambda Function. A value of 0 disables Lambda Function from being triggered and -1 removes any concurrency limitations. Defaults to Unreserved Concurrency Limits -1. number -1 no
role_description Description of IAM role to use for Lambda Function string null no
role_force_detach_policies Specifies to force detaching any policies the IAM role has before destroying it. bool true no
role_name Name of IAM role to use for Lambda Function string null no
role_path Path of IAM role to use for Lambda Function string null no
role_permissions_boundary The ARN of the policy that is used to set the permissions boundary for the IAM role used by Lambda Function string null no
role_tags A map of tags to assign to IAM role map(string) {} no
runtime Lambda Function runtime string "" no
s3_acl The canned ACL to apply. Valid values are private, public-read, public-read-write, aws-exec-read, authenticated-read, bucket-owner-read, and bucket-owner-full-control. Defaults to private. string "private" no
s3_bucket S3 bucket to store artifacts string null no
s3_existing_package The S3 bucket object with keys bucket, key, version pointing to an existing zip-file to use map(string) null no
s3_object_storage_class Specifies the desired Storage Class for the artifact uploaded to S3. Can be either STANDARD, REDUCED_REDUNDANCY, ONEZONE_IA, INTELLIGENT_TIERING, or STANDARD_IA. string "ONEZONE_IA" no
s3_object_tags A map of tags to assign to S3 bucket object. map(string) {} no
s3_prefix Directory name where artifacts should be stored in the S3 bucket. If unset, the path from artifacts_dir is used string null no
s3_server_side_encryption Specifies server-side encryption of the object in S3. Valid values are "AES256" and "aws:kms". string null no
source_path The absolute path to a local file or directory containing your Lambda source code any null no
store_on_s3 Whether to store produced artifacts on S3 or locally. bool false no
tags A map of tags to assign to resources. map(string) {} no
timeout The amount of time your Lambda Function has to run in seconds. number 3 no
tracing_mode Tracing mode of the Lambda Function. Valid value can be either PassThrough or Active. string null no
trusted_entities List of additional trusted entities for assuming Lambda Function role (trust relationship) any [] no
use_existing_cloudwatch_log_group Whether to use an existing CloudWatch log group or create new bool false no
vpc_security_group_ids List of security group ids when Lambda Function should run in the VPC. list(string) null no
vpc_subnet_ids List of subnet ids when Lambda Function should run in the VPC. Usually private or intra subnets. list(string) null no

Outputs

Name Description
lambda_cloudwatch_log_group_arn The ARN of the Cloudwatch Log Group
lambda_cloudwatch_log_group_name The name of the Cloudwatch Log Group
lambda_event_source_mapping_function_arn The ARN of the Lambda function the event source mapping is sending events to
lambda_event_source_mapping_state The state of the event source mapping
lambda_event_source_mapping_state_transition_reason The reason the event source mapping is in its current state
lambda_event_source_mapping_uuid The UUID of the created event source mapping
lambda_function_arn The ARN of the Lambda Function
lambda_function_invoke_arn The Invoke ARN of the Lambda Function
lambda_function_kms_key_arn The ARN for the KMS encryption key of Lambda Function
lambda_function_last_modified The date Lambda Function resource was last modified
lambda_function_name The name of the Lambda Function
lambda_function_qualified_arn The ARN identifying your Lambda Function Version
lambda_function_source_code_hash Base64-encoded representation of raw SHA-256 sum of the zip file
lambda_function_source_code_size The size in bytes of the function .zip file
lambda_function_version Latest published version of Lambda Function
lambda_layer_arn The ARN of the Lambda Layer with version
lambda_layer_created_date The date Lambda Layer resource was created
lambda_layer_layer_arn The ARN of the Lambda Layer without version
lambda_layer_source_code_size The size in bytes of the Lambda Layer .zip file
lambda_layer_version The Lambda Layer version
lambda_role_arn The ARN of the IAM role created for the Lambda Function
lambda_role_name The name of the IAM role created for the Lambda Function
lambda_role_unique_id The unique id of the IAM role created for the Lambda Function
local_filename The filename of zip archive deployed (if deployment was from local)
s3_object The map with S3 object data of zip archive deployed (if deployment was from S3)

Authors

Module managed by Anton Babenko. Check out serverless.tf to learn more about doing serverless with Terraform.

Please reach out to Betajob if you are looking for commercial support for your Terraform, AWS, or serverless project.

License

Apache 2 Licensed. See LICENSE for full details.
