Serverless security with AWS Lambda & CircleCI

Serverless computing helps developers build applications without managing servers. AWS Lambda makes this easy, but enforcing security is vital to prevent data breaches and keep sensitive information out of unauthorized hands.
Instead of manually setting up cloud resources, you can use Terraform, an infrastructure-as-code tool that defines and manages your infrastructure with declarative configuration files. It simplifies provisioning by using clear, consistent syntax, making it easy to version and reuse code. Terraform integrates well with development tools, allowing you to manage infrastructure alongside application code, and it supports CI/CD pipelines for automated, repeatable deployments.
In this tutorial, you will learn how to build, secure, and continuously deploy a serverless function using AWS Lambda, DynamoDB, and CircleCI. You will apply best practices like IAM least privilege, input validation, XSS protection, and Infrastructure as Code using Terraform.
Prerequisites
In this tutorial, you will create an AWS Lambda handler using Node.js and define your infrastructure with Terraform. You must install the AWS CLI to configure your credentials and set up the Terraform CLI to manage your infrastructure changes.
- Install Node.js (v20+) and npm
- Get a GitHub account
- Create an AWS account
- Install the AWS CLI
- Create a CircleCI account
- Install Terraform CLI
- Install Postman
- Get a free Snyk account using your GitHub credentials
Constructing the project
Before writing any code, create the required directories and files to match this project structure:
.
└── serverless-security-lambda-circleci
    ├── infrastructure
    │   ├── lambda
    │   │   ├── main.tf
    │   │   └── variables.tf
    │   ├── terraform.tf
    │   ├── terraform.tfvars
    │   └── variables.tf
    ├── package.json
    └── src
        └── lambda
            └── index.js
If you’re on macOS or Linux, run these commands from your terminal:
mkdir -p serverless-security-lambda-circleci/infrastructure/lambda
mkdir -p serverless-security-lambda-circleci/src/lambda
touch serverless-security-lambda-circleci/package.json
touch serverless-security-lambda-circleci/infrastructure/lambda/main.tf
touch serverless-security-lambda-circleci/infrastructure/lambda/variables.tf
touch serverless-security-lambda-circleci/infrastructure/terraform.tf
touch serverless-security-lambda-circleci/infrastructure/terraform.tfvars
touch serverless-security-lambda-circleci/infrastructure/variables.tf
touch serverless-security-lambda-circleci/src/lambda/index.js
On Windows, use PowerShell to create the structure with these commands:
New-Item -ItemType Directory -Path "serverless-security-lambda-circleci\infrastructure\lambda" -Force
New-Item -ItemType Directory -Path "serverless-security-lambda-circleci\src\lambda" -Force
New-Item -ItemType File -Path "serverless-security-lambda-circleci\package.json" -Force
New-Item -ItemType File -Path "serverless-security-lambda-circleci\infrastructure\lambda\main.tf" -Force
New-Item -ItemType File -Path "serverless-security-lambda-circleci\infrastructure\lambda\variables.tf" -Force
New-Item -ItemType File -Path "serverless-security-lambda-circleci\infrastructure\terraform.tf" -Force
New-Item -ItemType File -Path "serverless-security-lambda-circleci\infrastructure\terraform.tfvars" -Force
New-Item -ItemType File -Path "serverless-security-lambda-circleci\infrastructure\variables.tf" -Force
New-Item -ItemType File -Path "serverless-security-lambda-circleci\src\lambda\index.js" -Force
The commands create the necessary directories and Terraform configuration files, such as main.tf, variables.tf, and terraform.tfvars. These files will define your infrastructure in Terraform using the HashiCorp Configuration Language (HCL). Throughout this tutorial, you will explore how these files are structured and how they contribute to building and deploying a secure, serverless application.
Begin by navigating into the directory:
cd serverless-security-lambda-circleci
Add a Node.js Lambda function
In this section, you will define a function using Node.js to perform CRUD (Create, Read, Update, Delete) operations for managing real estate listings stored in AWS DynamoDB.
Modify the existing lambda directory inside the src folder by updating the contents of the preexisting index.js file with your handler logic, and adjust the existing package.json file to include any required dependencies.
In the package.json file, replace its contents with the ones below to define the name of the Node.js project and add the dependencies that will be used by your handler.
{
  "name": "serverless-security",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@aws-sdk/client-dynamodb": "^3.767.0",
    "@aws-sdk/lib-dynamodb": "^3.767.0",
    "dotenv": "^16.4.7",
    "express": "^4.21.2",
    "helmet": "^8.0.0",
    "serverless-http": "^3.2.0",
    "xss": "^1.0.15"
  },
  "devDependencies": {
    "@types/express": "^5.0.1"
  }
}
The AWS DynamoDB database lets you securely manage property listings by adding, retrieving, updating, and deleting entries. To protect your data, you implement input validation and request sanitization to prevent injection attacks, and you use AWS CloudWatch to log requests and monitor activity. The secure API is built using Express + serverless-http for effective routing, Helmet to enforce HTTP security headers, the xss module for thorough input sanitization, and both @aws-sdk/client-dynamodb and @aws-sdk/lib-dynamodb for interacting with DynamoDB.
After you add the dependencies, run this command to install the Node.js packages:
npm install
Implementing secure CRUD operations
In this section, you will build CRUD operations (Create, Read, Update, Delete) to manage real estate listings.
Modify the existing src/lambda/index.js file and add the main content for the AWS Lambda function.
const express = require("express");
const helmet = require("helmet");
const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const {
  DynamoDBDocumentClient,
  GetCommand,
  PutCommand,
  UpdateCommand,
  DeleteCommand,
  ScanCommand
} = require("@aws-sdk/lib-dynamodb");
const dotenv = require("dotenv");
const serverless = require("serverless-http");
const xss = require("xss");

dotenv.config();

const app = express();
app.use(express.json());
app.use(helmet());

const dbClient = new DynamoDBClient({ region: process.env.AWS_REGION });
const docClient = DynamoDBDocumentClient.from(dbClient);

// Read the table name from the environment (set by Terraform), with a fallback
const TABLE_NAME = process.env.TABLE_NAME || "RealEstateListings";
Sanitizing input
Before any data is persisted, you run a custom validateProperty() function to ensure all fields are correctly structured. You also use the xss module inside sanitizeInput() to clean any malicious input, helping to defend against injection and XSS attacks.
function validateProperty(data) {
  if (!data.PropertyID || typeof data.PropertyID !== "string") return "Invalid PropertyID";
  if (!data.Title || typeof data.Title !== "string") return "Invalid Title";
  if (!data.Description || typeof data.Description !== "string") return "Invalid Description";
  if (!["Rent", "Sale"].includes(data.PropertyType)) return "Invalid PropertyType (Must be 'Rent' or 'Sale')";
  if (typeof data.Price !== "number" || data.Price < 0) return "Invalid Price";
  if (!data.PropertyLocation || typeof data.PropertyLocation !== "string") return "Invalid PropertyLocation";
  return null;
}

function sanitizeInput(data) {
  return {
    PropertyID: xss(data.PropertyID),
    Title: xss(data.Title),
    Description: xss(data.Description),
    PropertyType: xss(data.PropertyType),
    Price: data.Price,
    PropertyLocation: xss(data.PropertyLocation)
  };
}
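To see the sanitizer in action, try a quick one-liner from the project root (where npm install placed the xss package); a malicious script tag should come back HTML-escaped and therefore inert:
node -e "const xss = require('xss'); console.log(xss('<script>alert(1)</script>'))"
# Expected output: &lt;script&gt;alert(1)&lt;/script&gt;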
HTTP verbs and CRUD operations
In your AWS Lambda, HTTP verbs map naturally to CRUD operations:
- POST creates a new property listing (e.g., adding a house for sale).
- GET reads or retrieves property data (such as listing details or search results).
- PUT or PATCH updates an existing property (like modifying the price or description).
- DELETE removes a property from the system (such as taking down a sold or expired listing).
POST /property
This route manages the creation of properties. It first validates and sanitizes the request body, then stores the new item in DynamoDB using the PutCommand. A conditional expression prevents existing entries from being overwritten.
app.post("/property", async (req, res) => {
  const error = validateProperty(req.body);
  if (error) return res.status(400).json({ error });
  const sanitized = sanitizeInput(req.body);
  try {
    const params = new PutCommand({
      TableName: TABLE_NAME,
      Item: sanitized,
      ConditionExpression: "attribute_not_exists(PropertyID)"
    });
    await docClient.send(params);
    res.status(201).json({ message: "Property created successfully" });
  } catch (err) {
    res.status(500).json({ error: "Error creating property", details: err.message });
  }
});
GET /property/:id
This endpoint retrieves a single property by its PropertyID. If the item is not found in DynamoDB, a 404 error is returned. Otherwise, it responds with the property data.
app.get("/property/:id", async (req, res) => {
  try {
    const params = new GetCommand({
      TableName: TABLE_NAME,
      Key: { PropertyID: req.params.id }
    });
    const data = await docClient.send(params);
    if (!data.Item) return res.status(404).json({ error: "Property not found" });
    res.status(200).json(data.Item);
  } catch (err) {
    res.status(500).json({ error: "Error fetching property", details: err.message });
  }
});
PUT /property/:id
This route is used to update an existing property. It requires the PropertyID and the updated fields, and it uses UpdateCommand with proper expression attributes. The update is aborted with a condition expression if the property doesn’t exist.
app.put("/property/:id", async (req, res) => {
  const sanitized = sanitizeInput(req.body);
  const { Title, Description, PropertyType, Price, PropertyLocation } = sanitized;
  if (!Title && !Description && !PropertyType && !Price && !PropertyLocation)
    return res.status(400).json({ error: "No fields to update" });
  try {
    const params = new UpdateCommand({
      TableName: TABLE_NAME,
      Key: { PropertyID: req.params.id },
      UpdateExpression: "SET Title = :t, Description = :d, PropertyType = :ty, Price = :p, PropertyLocation = :l",
      ExpressionAttributeValues: {
        ":t": Title,
        ":d": Description,
        ":ty": PropertyType,
        ":p": Price,
        ":l": PropertyLocation
      },
      ConditionExpression: "attribute_exists(PropertyID)"
    });
    await docClient.send(params);
    res.status(200).json({ message: "Property updated successfully" });
  } catch (err) {
    res.status(500).json({ error: "Error updating property", details: err.message });
  }
});
DELETE /property/:id
This route uses DeleteCommand to remove a listing, targeting the property by ID. A condition expression ensures the property exists before deletion, avoiding unintended errors.
app.delete("/property/:id", async (req, res) => {
  try {
    const params = new DeleteCommand({
      TableName: TABLE_NAME,
      Key: { PropertyID: req.params.id },
      ConditionExpression: "attribute_exists(PropertyID)"
    });
    await docClient.send(params);
    res.status(200).json({ message: "Property deleted successfully" });
  } catch (err) {
    res.status(500).json({ error: "Error deleting property", details: err.message });
  }
});
GET /properties
This endpoint returns a limited list of properties (up to 10 per scan) using ScanCommand. You can later enhance this with filtering, pagination tokens, and indexing for scalability.
app.get("/properties", async (req, res) => {
  try {
    const params = new ScanCommand({
      TableName: TABLE_NAME,
      Limit: 10
    });
    const data = await docClient.send(params);
    res.status(200).json(data.Items);
  } catch (err) {
    res.status(500).json({ error: "Error fetching properties", details: err.message });
  }
});
Start the server
Finally, the Express app is bound to the serverless wrapper for deployment on AWS Lambda. For local testing, it listens on the port defined by process.env.PORT, defaulting to 3000. The listener only starts when the file is run directly, so it does not interfere with the Lambda runtime.
// Only start a local HTTP listener when the file is run directly (not inside Lambda)
if (require.main === module) {
  const PORT = process.env.PORT || 3000;
  app.listen(PORT, () => console.log(`Server running on port ${PORT}`));
}

module.exports.handler = serverless(app);
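As a quick local sanity check, you can run the handler as a plain Express server and exercise it with curl. This is a sketch only: it assumes your shell exports valid AWS credentials and a region, and requests will hit the 500 error path until the DynamoDB table exists:
# From the project root, where npm install placed node_modules
export AWS_REGION=us-east-1   # region for the DynamoDB client
node src/lambda/index.js &

# Create a listing against the local server
curl -X POST http://localhost:3000/property \
  -H "Content-Type: application/json" \
  -d '{"PropertyID":"P1","Title":"Test","Description":"Local test","PropertyType":"Rent","Price":100,"PropertyLocation":"NYC"}'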
Implementing secure deployment with CircleCI & AWS
Once the security scan is complete, you must deploy your serverless function. You will use CircleCI for continuous integration and delivery and manage your infrastructure with Terraform.
In this project, Terraform provisions a Lambda function (aws_lambda_function) and a DynamoDB table (aws_dynamodb_table). IAM roles (aws_iam_role) are defined with policies that strictly control what actions the Lambda function is allowed to perform, specifically granting permission to read and write to the DynamoDB table. This ensures least-privilege access and isolates resources for better security.
You will also configure your CircleCI pipeline to:
- Authenticate with AWS using environment variables or contexts
- Run terraform plan and terraform apply only after tests and security checks pass
- Allow rollbacks by using versioned Lambda deployments (e.g., via publish = true in aws_lambda_function)
The next diagram illustrates how IAM roles are used to securely bind the Lambda function with DynamoDB and restrict access.
This setup lets you build a secure, automated deployment workflow while minimizing risk through controlled access and rollback capability.
Provision AWS resources with Terraform
You must first create an S3 bucket for your Terraform state to have a safe, central place to store and track your infrastructure changes. This is especially helpful when working with a team.
Now, create an S3 bucket for the Terraform backend state. S3 bucket names must be globally unique, so append a distinguishing suffix; this tutorial uses tf-state-bucket-20250330-2000, which is referenced in the backend configuration below.
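If you prefer the CLI to the AWS console, you can create the bucket with a single command (substitute your own unique bucket name):
aws s3 mb s3://tf-state-bucket-20250330-2000 --region us-east-1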
Configure Terraform backend and provider
Go to the infrastructure/terraform.tf file to configure the S3 backend and AWS provider:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket       = "tf-state-bucket-20250330-2000"
    key          = "terraform/state.tfstate"
    region       = "us-east-1"
    use_lockfile = true
  }
}

# Creating AWS Provider
provider "aws" {
  region     = "us-east-1"
  access_key = var.terraform_aws_access_key
  secret_key = var.terraform_aws_secret_key
}

# Call the lambda Terraform module
module "lambda" {
  source                   = "./lambda"
  terraform_aws_access_key = var.terraform_aws_access_key
  terraform_aws_secret_key = var.terraform_aws_secret_key
}
Retrieve your AWS credentials
In this step, you need to configure AWS credentials, then add your AWS Access Key and AWS Secret Key securely via terraform.tfvars:
terraform_aws_access_key = "your-access-key"
terraform_aws_secret_key = "your-secret-key"
To avoid inadvertently versioning these secrets, add these lines to your .gitignore file:
*.tfvars
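If you would rather keep secrets out of files entirely, Terraform also reads variables from TF_VAR_-prefixed environment variables, so an equivalent setup (with placeholder values) is:
export TF_VAR_terraform_aws_access_key="your-access-key"
export TF_VAR_terraform_aws_secret_key="your-secret-key"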
Now modify the infrastructure/lambda/variables.tf file to declare the input variables the Lambda module expects (the root infrastructure/variables.tf file needs the same two declarations so the root module can accept the credentials and pass them through):
variable "terraform_aws_access_key" {
  type        = string
  description = "AWS terraform access key"
  sensitive   = true
}

variable "terraform_aws_secret_key" {
  type        = string
  description = "AWS terraform secret key"
  sensitive   = true
}
Creating the main Terraform configuration
In this section, you configure the main Terraform file and add the AWS resources needed for the AWS Lambda.
Start by going to the infrastructure/lambda/main.tf file and adding each of these subsections:
AWS provider configuration
provider "aws" {
  region     = "us-east-1"
  access_key = var.terraform_aws_access_key
  secret_key = var.terraform_aws_secret_key
}
This section sets up the AWS provider, specifying the region and credentials Terraform will use to deploy resources. It’s essential to connect your Terraform configuration to your AWS account.
DynamoDB table resource
resource "aws_dynamodb_table" "real_estate" {
name = "RealEstateListings"
billing_mode = "PAY_PER_REQUEST"
hash_key = "PropertyID"
attribute {
name = "PropertyID"
type = "S"
}
tags = {
Name = "RealEstateListings"
Environment = "Production"
}
}
Here, you define a DynamoDB table named RealEstateListings, using on-demand billing (PAY_PER_REQUEST). The table uses PropertyID as the primary key and includes tags for better resource management.
Lambda IAM role and basic execution policy
AWS Lambda needs permissions to interact with DynamoDB, which you will define using IAM roles.
Following the Principle of Least Privilege, you will create an IAM role that gives only the necessary access to the AWS Lambda.
resource "aws_iam_role" "lambda_exec" {
name = "serverless_real_estate_lambda"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [{
Action = "sts:AssumeRole",
Effect = "Allow",
Sid = "",
Principal = {
Service = "lambda.amazonaws.com"
}
}]
})
}
This IAM role allows AWS Lambda to assume the permissions needed to execute and interact with other AWS services. It uses a trust relationship with the Lambda service.
resource "aws_iam_role_policy_attachment" "lambda_role_attachment" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
role = aws_iam_role.lambda_exec.name
}
This attaches a basic execution policy to the Lambda role, enabling logging to Amazon CloudWatch.
API gateway setup
The block below defines an HTTP API Gateway that will serve as the entry point for your Lambda-based REST API:
resource "aws_apigatewayv2_api" "lambda" {
name = "serverless_lambda_gw"
protocol_type = "HTTP"
}
Then, the next segment creates a deployment stage for the API Gateway, with automatic deployment of new changes.
resource "aws_apigatewayv2_stage" "lambda" {
api_id = aws_apigatewayv2_api.lambda.id
name = "serverless_lambda_stage"
auto_deploy = true
}
Lambda integration and routing
Now, the aws_apigatewayv2_integration resource creates an integration between the API Gateway and the Lambda function using the AWS_PROXY type, allowing full event forwarding:
resource "aws_apigatewayv2_integration" "real_estate_lambda" {
api_id = aws_apigatewayv2_api.lambda.id
integration_uri = aws_lambda_function.real_estate_lambda.invoke_arn
integration_type = "AWS_PROXY"
integration_method = "POST"
}
Then, the aws_apigatewayv2_route resource maps HTTP requests on the /property/{id} route to the Lambda function:
resource "aws_apigatewayv2_route" "real_estate_lambda" {
api_id = aws_apigatewayv2_api.lambda.id
route_key = "ANY /property/{id}"
target = "integrations/${aws_apigatewayv2_integration.real_estate_lambda.id}"
}
Lambda permissions
The aws_lambda_permission resource allows API Gateway to invoke the Lambda function securely.
resource "aws_lambda_permission" "api_gw" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.real_estate_lambda.function_name
principal = "apigateway.amazonaws.com"
source_arn = "${aws_apigatewayv2_api.lambda.execution_arn}/*/*"
}
IAM policy for DynamoDB access
The aws_iam_policy resource grants the Lambda function permissions to perform all CRUD operations on the DynamoDB table.
resource "aws_iam_policy" "dynamodb_policy" {
name = "LambdaDynamoDBPolicy"
description = "Policy for Lambda to interact with DynamoDB"
policy = jsonencode({
Version = "2012-10-17",
Statement = [{
Effect = "Allow",
Action = [
"dynamodb:PutItem",
"dynamodb:GetItem",
"dynamodb:Scan",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem"
],
Resource = aws_dynamodb_table.real_estate.arn
}]
})
}
Then, the aws_iam_role_policy_attachment resource attaches the DynamoDB policy to the Lambda execution role.
resource "aws_iam_role_policy_attachment" "lambda_dynamodb_attach" {
policy_arn = aws_iam_policy.dynamodb_policy.arn
role = aws_iam_role.lambda_exec.name
}
Packaging lambda code
Now, it’s time to add the archive_file data block:
data "archive_file" "application-code" {
type = "zip"
source_dir = "${path.module}/../../src/lambda"
output_path = "lambda.zip"
}
This data block packages the contents of the src/lambda folder into a ZIP file, which will be used for deploying the Lambda function.
Creating the Lambda function
Now it’s time to create the actual Lambda function:
resource "aws_lambda_function" "real_estate_lambda" {
filename = data.archive_file.application-code.output_path
source_code_hash = data.archive_file.application-code.output_base64sha256
function_name = "real_state_api"
handler = "index.handler"
runtime = "nodejs20.x"
memory_size = 1024
timeout = 300
role = aws_iam_role.lambda_exec.arn
environment {
variables = {
TABLE_NAME = aws_dynamodb_table.real_estate.name
}
}
}
Ensure that you configure the runtime details, including memory allocation, timeout settings, handler specification, and environment variables for the DynamoDB table name.
Lambda Function URL
The next step is to generate a secure Lambda Function URL. This URL supports IAM-based access, which enhances security by requiring clients to provide an AccessKey, SecretKey, and SessionToken when accessing the AWS Lambda function.
resource "aws_lambda_function_url" "real_estate_lambda_url" {
function_name = aws_lambda_function.real_estate_lambda.function_name
authorization_type = "AWS_IAM"
}
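Before handing deployment off to CircleCI, you can optionally sanity-check the configuration locally. This sketch assumes you have populated terraform.tfvars and can reach the S3 state bucket:
cd infrastructure
terraform init      # configures the S3 backend and downloads the AWS provider
terraform validate  # catches syntax and reference errors early
terraform plan      # shows the resources that would be created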
Set up CI/CD with CircleCI
To grant CircleCI access to AWS, you need an access key pair. You can obtain these keys by creating a new IAM user in the AWS Management Console, assigning it the necessary permissions, and downloading the access key ID and secret access key.
Store AWS secrets in CircleCI
Now, create the environment variables and securely store your AWS Access Key and AWS Secret Access Key. This step is necessary so that your CircleCI project is granted permission to create resources on AWS on your behalf.
- Go to your CircleCI Project Settings.
- Under Environment Variables, add the keys and their respective values:
  - AWS_ACCESS_KEY_ID
  - AWS_SECRET_ACCESS_KEY
Set up Snyk and store auth token in CircleCI
To scan your Lambda function for security vulnerabilities, you’ll need to connect your Snyk account to your GitHub project:
- Log in to Snyk.
- Go to Projects > Add Project, and choose GitHub as the integration source.
- Find and import your repository (e.g. “serverless-security-lambda-circleci”).
- Click on your personal account name.
- Go to Account Settings > General > API Token.
- Copy this token and add it as a new SNYK_API_TOKEN environment variable for your CircleCI project.
Add configuration script
Let’s begin by adding a CircleCI pipeline configuration file to automate the build and deployment process. Create a .circleci/config.yml file at the root of your project and add this content:
version: 2.1

orbs:
  terraform: circleci/terraform@3.6.0
  aws-cli: circleci/aws-cli@4.1.2
  snyk: snyk/snyk@2.3.0

jobs:
  project_checkout:
    machine:
      image: ubuntu-2204:edge
      docker_layer_caching: true
    steps:
      - checkout
      - persist_to_workspace:
          root: .
          paths:
            - .

  build_lambdas:
    docker:
      - image: cimg/node:21.4.0
    working_directory: ~/project
    steps:
      - attach_workspace:
          at: ./
      - run:
          command: |
            cd src/lambda
            npm install
      # Persist the installed dependencies so later jobs can package them
      - persist_to_workspace:
          root: .
          paths:
            - src/lambda

  security_scan:
    docker:
      - image: cimg/node:21.4.0
    working_directory: ~/project
    steps:
      - attach_workspace:
          at: ./
      - run:
          command: |
            cd src/lambda
            npm install
      - run:
          name: Run npm audit
          command: |
            cd src/lambda
            npm audit --audit-level=low
      - run: echo "Running Snyk scan"
      - snyk/scan:
          token-variable: SNYK_API_TOKEN
          target-file: src/lambda/package.json
          severity-threshold: low

  plan_infrastructure:
    executor: terraform/default
    steps:
      - attach_workspace:
          at: ./
      - run: echo "Executing terraform init"
      - terraform/init:
          path: infrastructure
          backend_config: |
            access_key=$AWS_ACCESS_KEY_ID,
            secret_key=$AWS_SECRET_ACCESS_KEY
      - run: echo "Executing terraform plan"
      - terraform/plan:
          path: infrastructure
          var: |
            terraform_aws_access_key=$AWS_ACCESS_KEY_ID
            terraform_aws_secret_key=$AWS_SECRET_ACCESS_KEY

  apply_infrastructure:
    executor: terraform/default
    steps:
      - attach_workspace:
          at: ./
      - run: echo "Executing terraform apply"
      - terraform/apply:
          path: infrastructure
          backend_config: |
            access_key=$AWS_ACCESS_KEY_ID,
            secret_key=$AWS_SECRET_ACCESS_KEY
          var: |
            terraform_aws_access_key=$AWS_ACCESS_KEY_ID
            terraform_aws_secret_key=$AWS_SECRET_ACCESS_KEY

workflows:
  ci-cd:
    jobs:
      - project_checkout
      - build_lambdas:
          requires:
            - project_checkout
      - security_scan:
          requires:
            - build_lambdas
      - plan_infrastructure:
          requires:
            - security_scan
      - apply_infrastructure:
          requires:
            - plan_infrastructure
This CI pipeline leverages several CircleCI orbs to simplify tasks like working with AWS CLI, Terraform, and Snyk security scanning. The pipeline consists of separate jobs that handle source checkout, dependency installation, security scans, and infrastructure provisioning with Terraform.
The security_scan step runs both npm audit and a Snyk scan to catch vulnerabilities in your Lambda function dependencies. After that, terraform plan and terraform apply are used to validate and provision the AWS infrastructure declared in the infrastructure directory.
The configuration is designed to ensure secure and consistent deployments. Once you’ve committed and pushed this file to your GitHub repository, CircleCI will automatically run the workflow on every commit.
Note: Ensure that you have configured your AWS credentials and the SNYK_API_TOKEN in your CircleCI project environment variables.
Commit and push changes
Run the git commands to push the .circleci/config.yml file, e.g.:
git add .circleci/config.yml
git commit -m "Add CircleCI pipeline for Terraform deployment"
git push origin main
After pushing, CircleCI will automatically trigger the pipeline, initializing, planning, and applying the Terraform configuration.
Verify the deployment
- In CircleCI, check your project pipeline.
- If successful, open your AWS Management Console and check for the deployed resources (or use the CLI sketch after this list):
  - AWS DynamoDB
  - AWS Lambda
  - API Gateway
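If you have the AWS CLI configured, a quick way to confirm the resources from the terminal (the names match the Terraform configuration above):
aws lambda get-function --function-name real_estate_api
aws dynamodb describe-table --table-name RealEstateListings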
Test the API
Use Postman or curl to test your API Gateway endpoint. But first, open your AWS Lambda page and grab the Function URL generated by Terraform via your CircleCI pipeline.
Get AWS credentials:
To test secured endpoints using Postman, you need temporary AWS credentials to authenticate your requests using AWS Signature Auth. Run this command to retrieve a session token along with an AccessKeyId and SecretAccessKey:
aws sts get-session-token
Once you have these credentials, go to Postman’s Authorization tab, choose AWS Signature, and fill in the AccessKey, SecretKey, and SessionToken fields accordingly. This allows Postman to sign your requests so AWS can authenticate them.
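If you prefer the command line, recent versions of curl (7.75+) can produce the same AWS Signature with the --aws-sigv4 option. A sketch, assuming you exported the three values returned by get-session-token as shell variables and your function runs in us-east-1 (Function URLs use lambda as the signing service name):
curl --aws-sigv4 "aws:amz:us-east-1:lambda" \
  --user "$ACCESS_KEY_ID:$SECRET_ACCESS_KEY" \
  -H "x-amz-security-token: $SESSION_TOKEN" \
  "https://[YOUR-AWS-LAMBDA-FUNCTION URL]/properties"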
Create a real estate property
Test this out using Postman. Send an HTTP POST request to https://[YOUR-AWS-LAMBDA-FUNCTION URL]/property. Use the data below as the request payload:
{
  "PropertyID": "P123",
  "Title": "Modern Loft",
  "Description": "Great place downtown",
  "PropertyType": "Rent",
  "Price": 1200,
  "PropertyLocation": "NYC"
}
Retrieve the real estate property
Now retrieve the property you just created by sending a GET request to the https://[YOUR-AWS-LAMBDA-FUNCTION URL]/property/P123 URL.
Bonus: Destroying resources
If you need to destroy the infrastructure, you can modify .circleci/config.yml to include:
jobs:
  terraform_destroy:
    executor: terraform/default
    steps:
      # checkout gives the job access to the Terraform configuration
      - checkout
      - run: echo "Executing terraform destroy"
      - terraform/destroy:
          path: infrastructure
          backend_config: |
            access_key=$AWS_ACCESS_KEY_ID,
            secret_key=$AWS_SECRET_ACCESS_KEY
          var: |
            terraform_aws_access_key=$AWS_ACCESS_KEY_ID
            terraform_aws_secret_key=$AWS_SECRET_ACCESS_KEY
Then, add this job to your workflow when needed.
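Alternatively, you can tear the stack down from your own machine; the same terraform.tfvars and backend access you used for planning are enough:
cd infrastructure
terraform init
terraform destroy   # prompts for confirmation before deleting resources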
Conclusion
In this tutorial, you explored how to build and deploy a secure serverless application using AWS Lambda, DynamoDB, and CircleCI. AWS Lambda let you run code without managing servers, while DynamoDB provided a scalable and fully managed NoSQL database. You defined your cloud resources using Terraform, which made the infrastructure reproducible, testable, and version-controlled.
You also implemented security best practices such as fine-grained IAM roles, input validation, and XSS protection to safeguard your application and data. On the CI/CD side, CircleCI helped you automate the testing and deployment pipeline, ensuring that updates to your codebase are deployed quickly and safely.
You can check out the complete source code used in this tutorial on GitHub. Feel free to use the repository as a starting point for your own secure and automated serverless deployments.