Tracking Azure SQL changes with Azure Functions and CI/CD automation
Imagine being able to automatically detect when a high-value order is placed, then log it and notify your sales team, all without touching your application code. Azure SQL Trigger Functions make this possible. By automating the response to database changes as they happen, you can streamline operations, sync data, and power workflows in near real time.
Azure SQL triggers, especially when combined with serverless functions, offer a powerful, low-maintenance way to respond to data changes in real time. Here are some key benefits:
- Real-time data responsiveness: React instantly to changes (INSERT/UPDATE/DELETE) without manual polling or cron jobs. Ideal for notifications, workflows, and syncing systems.
- No code changes to your app: Triggers operate at the database level, meaning you can augment behavior without modifying your application logic.
- Serverless efficiency: Run lightweight, event-driven logic only when needed. No servers to manage, and you pay per execution.
- Seamless integration with the Azure ecosystem: Connect easily to Event Grid, Service Bus, Logic Apps, Storage, or even Machine Learning endpoints, enabling full end-to-end workflows.
- Improved auditing and observability: Track, log, or back up changes automatically to improve compliance, debugging, and visibility.
- Flexibility and scalability: Whether it is for a few events or thousands per minute, the architecture can scale dynamically.
In the sections that follow, we’ll demonstrate how to build a simple but realistic auto yard application using Python, Azure SQL, and Azure Functions. You’ll learn how serverless triggers can respond to database changes in real time and how to automate deployments using CI/CD pipelines.
Prerequisites
Here is what you’ll need before you start building:
- An Azure account to host your function.
- The Azure CLI to interact with Azure resources from your terminal.
- A CircleCI account for CI/CD automation.
- Some Python knowledge.
- Git installed and a GitHub account. These will be used for version control and repository management.
Notes:
- All Azure-related commands in this tutorial are shown using the locally installed Azure CLI. If you’re working from a machine where you can’t install tools, you can use the Azure Cloud Shell, which provides the same CLI experience through your browser.
- This tutorial uses Python in the examples, but you don’t have to be a Python pro. If you prefer another language, just follow along and translate the steps using your preferred tool.
Understanding the project architecture
To make the codebase easier to understand, here is the folder structure, followed by an explanation of what each part is responsible for.
.
├── .circleci
│   └── config.yml
├── .gitignore
├── host.json
├── track_changes
│   ├── function.json
│   └── __init__.py
└── requirements.txt
This project follows a clean and modular layout. At the root, you’ll find the .circleci/ folder, which contains the config.yml file. This is the configuration that defines the CI/CD pipeline for automated deployment using CircleCI. Also at the root, the .gitignore file ensures that unnecessary or sensitive files, such as virtual environments or compiled Python files, are excluded from version control.
The host.json file is used to configure global settings for the Azure Functions host, such as logging or extension bundles. The core function logic lives inside the track_changes/ directory. This folder contains two essential files:
- function.json defines the function's bindings.
- __init__.py holds the Python code that processes changes coming from the SQL table.
The requirements.txt file lists the Python dependencies that need to be installed when running the function.
Create the Azure SQL database
Your Azure Function needs a database to work with. Specifically, this will be an Azure SQL instance that stores the auto yard's inventory and sales data. You'll take care of this setup manually: creating the database, adding the schema, and inserting some test records. CI/CD will come into play for deploying the function, but for this tutorial you won't be using it to provision the database.
Log into your Azure account and select a subscription type:
az login
Before running the SQL setup commands, it’s helpful to define a few variables in your terminal for convenience and consistency. These variables will be used throughout the CLI commands. If you need help, get a quick primer on Azure resource naming conventions.
From your terminal, run:
RG_NAME="carYardResourceGrp"
LOCATION="<preferred-region>"
SQL_SERVER_NAME="my-sql-server-$(openssl rand -hex 3)"
ADMIN_USER="caryardadmin"
ADMIN_PASS='<yourpassword>'
DB_NAME="myCarYardDB"
FIREWALL_RULE_NAME="AllowAll"
START_IP="0.0.0.0"
END_IP="255.255.255.255"
Here is a description of what each variable represents:
- RG_NAME: Name of the Azure Resource Group that will contain all your resources.
- LOCATION: Azure region where resources will be deployed (e.g., westus). Use az account list-locations to get a list of programmatic names (under the "name" property) that you can use.
- SQL_SERVER_NAME: Unique name for the Azure SQL Server. A random suffix is added to avoid naming conflicts.
- ADMIN_USER: Username for the SQL Server admin.
- ADMIN_PASS: Password for the SQL Server admin. Always use a strong password.
- DB_NAME: Name of the SQL database that will store your data.
- FIREWALL_RULE_NAME: Label for the firewall rule you're about to create. In this case, AllowAll is used because the rule opens access to all IP addresses (from 0.0.0.0 to 255.255.255.255), which can be helpful for quick testing or demos.
- START_IP and END_IP: IP range allowed to access the SQL Server. 0.0.0.0 to 255.255.255.255 allows all IPs.
Note: Allowing all IPs is not recommended for production environments because it exposes your SQL Server to the internet. Always restrict access to trusted IP ranges whenever you can.
Create an Azure resource group:
az group create --name $RG_NAME --location $LOCATION
This command creates a resource group in the specified Azure region (--location $LOCATION). A resource group is a logical container for all related resources, like your SQL Server and database.
If you haven’t already, register the SQL resource provider:
az provider register --namespace Microsoft.Sql
Registering the Microsoft.Sql resource provider with your Azure subscription enables SQL-related services like Azure SQL Server and Database.
Verify SQL provider registration:
az provider list --query "[?namespace=='Microsoft.Sql']" --output table
This command lists the registration status of the Microsoft.Sql provider in a table format so you can confirm that it’s registered before you continue.
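Once registration completes, the table output should look similar to this (exact columns can vary by CLI version):

Namespace      RegistrationPolicy    RegistrationState
-------------  --------------------  -------------------
Microsoft.Sql  RegistrationRequired  Registered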
Create the Azure SQL server:
az sql server create \
--name $SQL_SERVER_NAME \
--resource-group $RG_NAME \
--location $LOCATION \
--admin-user $ADMIN_USER \
--admin-password $ADMIN_PASS
The output:
{
  "administratorLogin": "caryardadmin",
  "administratorLoginPassword": null,
  "administrators": null,
  "externalGovernanceStatus": "Disabled",
  "federatedClientId": null,
  "fullyQualifiedDomainName": "my-sql-server-xxxx.database.windows.net",
  "id": "/subscriptions/xxxxxx/resourceGroups/carYardResourceGrp/providers/Microsoft.Sql/servers/my-sql-server-xxxxx",
  "identity": null,
  "isIPv6Enabled": null,
  "keyId": null,
  "kind": "v12.0",
  "location": "eastus2",
  "minimalTlsVersion": "1.2",
  "name": "my-sql-server-xxxxx",
  "primaryUserAssignedIdentityId": null,
  "privateEndpointConnections": [],
  "publicNetworkAccess": "Enabled",
  "resourceGroup": "carYardResourceGrp",
  "restrictOutboundNetworkAccess": "Disabled",
  "state": "Ready",
  "tags": null,
  "type": "Microsoft.Sql/servers",
  "version": "12.0",
  "workspaceFeature": null
}
Create the SQL database:
az sql db create \
--resource-group $RG_NAME \
--server $SQL_SERVER_NAME \
--name $DB_NAME \
--service-objective Basic
This creates an SQL database within the SQL server you created earlier. The Basic service tier works well for testing and small workloads. It will produce this output:
{
  ...
  "catalogCollation": "SQL_Latin1_General_CP1_CI_AS",
  "collation": "SQL_Latin1_General_CP1_CI_AS",
  "createMode": null,
  "creationDate": "2025-07-11T05:54:56.397000+00:00",
  ...
  "databaseId": "xxxxx",
  "defaultSecondaryLocation": "centralus",
  "id": "/subscriptions/xxxxx/resourceGroups/carYardResourceGrp/providers/Microsoft.Sql/servers/my-sql-server-xxxxx/databases/myCarYardDB",
  "identity": null,
  "maintenanceConfigurationId": "/subscriptions/xxxxx/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/SQL_Default",
  ...
  "type": "Microsoft.Sql/servers/databases",
  "useFreeLimit": null,
  "zoneRedundant": false
}
Create a Firewall rule to allow access to the server:
az sql server firewall-rule create \
--resource-group $RG_NAME \
--server $SQL_SERVER_NAME \
--name $FIREWALL_RULE_NAME \
--start-ip-address $START_IP \
--end-ip-address $END_IP
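To double-check that the rule was applied, list the server's firewall rules:

az sql server firewall-rule list \
  --resource-group $RG_NAME \
  --server $SQL_SERVER_NAME \
  --output table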
Get the SQL connection string:
az sql db show-connection-string \
--client ado.net \
--server $SQL_SERVER_NAME \
--name $DB_NAME
This command outputs a template connection string for connecting to your Azure SQL database using ADO.NET, the format the SQL trigger binding expects. Replace <username> and <password> with your credentials, and store the connection string securely.
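For reference, the ADO.NET template returned by this command looks similar to the following (options such as the timeout can differ by CLI version):

Server=tcp:my-sql-server-xxxxx.database.windows.net,1433;Initial Catalog=myCarYardDB;Persist Security Info=False;User ID=<username>;Password=<password>;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;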
Go to your portal’s query editor. Create the tables and seed some data into the tables by running this query:
-- Create tables
CREATE TABLE Salesmen (
    Id INT PRIMARY KEY IDENTITY(1,1),
    Name NVARCHAR(100),
    Email NVARCHAR(100)
);

CREATE TABLE Cars (
    Id INT PRIMARY KEY IDENTITY(1,1),
    Make NVARCHAR(50),
    Model NVARCHAR(50),
    Year INT,
    Price DECIMAL(18,2)
);

CREATE TABLE Sales (
    Id INT PRIMARY KEY IDENTITY(1,1),
    CarId INT,
    SalesmanId INT,
    SaleDate DATE,
    FOREIGN KEY (CarId) REFERENCES Cars(Id),
    FOREIGN KEY (SalesmanId) REFERENCES Salesmen(Id)
);

-- Seed data
INSERT INTO Salesmen (Name, Email)
VALUES ('Alice Johnson', 'alice@caryard.com'),
       ('Bob Smith', 'bob@caryard.com');

INSERT INTO Cars (Make, Model, Year, Price)
VALUES ('Toyota', 'Camry', 2020, 25000),
       ('Ford', 'Mustang', 2018, 30000),
       ('Tesla', 'Model 3', 2022, 45000);

INSERT INTO Sales (CarId, SalesmanId, SaleDate)
VALUES (1, 1, '2024-06-01'),
       (3, 2, '2024-06-05');
To enable change tracking at both the database and table levels, run these queries:
-- Enable Change Tracking on the database
ALTER DATABASE myCarYardDB
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
-- Enable Change Tracking on the Sales table
ALTER TABLE Sales
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON);
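If you want to confirm that change tracking is active before moving on, you can query SQL Server's built-in catalog views:

-- Databases with change tracking enabled
SELECT * FROM sys.change_tracking_databases;

-- Tables with change tracking enabled in the current database
SELECT OBJECT_NAME(object_id) AS table_name, is_track_columns_updated_on
FROM sys.change_tracking_tables;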
With your schema set up, it’s time to bring in the logic layer.
Building the Azure Trigger Function
In this section, you’ll build the Azure SQL-triggered function that will respond to events in your car yard database.
Create the directory structure
Create the folders (for MacOS/Linux):
mkdir -p SQLChangeFunction/.circleci
mkdir -p SQLChangeFunction/track_changes
cd SQLChangeFunction
For Windows:
mkdir SQLChangeFunction\.circleci
mkdir SQLChangeFunction\track_changes
cd SQLChangeFunction
Create the files (for MacOS/Linux):
touch .circleci/config.yml
touch .gitignore
touch host.json
touch requirements.txt
touch track_changes/function.json
touch track_changes/__init__.py
For Windows, touch isn't available, so you'll create the files with type nul > filename instead. Using Command Prompt, run:
type nul > .circleci\config.yml
type nul > .gitignore
type nul > host.json
type nul > requirements.txt
type nul > track_changes\function.json
type nul > track_changes\__init__.py
Adding dependencies
Now you can add the Python dependencies you need for your Azure Function app. These packages will be installed when the function is deployed and are essential for it to function.
Open the requirements.txt file you created earlier. In the text file, paste this line:
azure-functions
azure-functions contains the core SDK for Azure Functions in Python. It provides decorators and helpers to build functions that respond to events like SQL triggers, queues, blobs, and more.
Configure host.json
The host.json file contains global configuration settings for your Azure Function app. It controls logging behavior, extension bundles, and runtime options.
Go to the project root, open the host.json file, and paste this content into it:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}
- "version": "2.0" specifies the Azure Functions runtime version. Version 2.0+ supports modern features and languages like Python.
- "logging" configures Application Insights logging behavior. Here, request telemetry is excluded from sampling to reduce noise while keeping useful logs.
- "extensionBundle" enables Azure to automatically install the necessary function extensions (e.g., for HTTP triggers and bindings). The version range [4.*, 5.0.0) ensures compatibility with current stable releases.
You rarely need to change this file unless you’re enabling specific bindings or tweaking telemetry.
Note: For this tutorial, you won’t be running the Azure Function app locally. That means there’s no need to create a local.settings.json file. All configurations, including environment variables like the SQL connection string, will be handled during deployment via CircleCI or directly within Azure.
Set up .gitignore
The .gitignore file ensures that unnecessary, sensitive, or environment-specific files are excluded from version control. In the root of your SQLChangeFunction directory, open or edit the .gitignore file. Add the following content:
# Byte-compiled / cache files
__pycache__/
*.py[cod]
*.pyo
*.pyd
*.pdb
*.egg-info/
*.eggs/
*.manifest
*.spec
# Azure Functions Core Tools artifacts
.azure/
bin/
obj/
.vscode/
# Python virtual environment
.env/
.venv/
env/
venv/
# macOS
.DS_Store
# VS Code settings
.vscode/
# Test output
.coverage
htmlcov/
.tox/
nosetests.xml
coverage.xml
*.cover
*.log
# CircleCI
.circleci/cache/
- Python cache files (__pycache__/, *.pyc): These are generated at runtime and don't need to be tracked.
- Azure/CI artifacts (.azure/, .circleci/cache/): Keeps deployment tooling and cache folders out of version control.
- Virtual environments (.venv/, env/): These are local setups and should never be committed.
- Editor settings (.vscode/) and system files (.DS_Store): Excluded to keep the project editor-agnostic and platform-neutral.
Create the track_changes Azure Function
In this step, you will implement the Azure Function that reacts to changes in the Sales table using an Azure SQL trigger. This function helps you detect inserts, updates, and deletions and respond accordingly using Python logic.
Define the trigger binding in function.json:
{
  "bindings": [
    {
      "name": "changes",
      "type": "sqlTrigger",
      "direction": "in",
      "tableName": "Sales",
      "connectionStringSetting": "SQL_CONNECTION_STRING"
    }
  ]
}
- "bindings": A list of all triggers and input/output bindings for the function.
- "name": "changes": The variable name that will be passed into your Python main() function. It will contain the raw JSON payload of changes.
- "type": "sqlTrigger": Tells Azure Functions that the trigger listens for changes in an Azure SQL table.
- "direction": "in": The data is passed into the function, not output.
- "tableName": "Sales": The table that Azure will monitor for changes.
- "connectionStringSetting": "SQL_CONNECTION_STRING": Points to a connection string configured in the Azure portal. More about this later in the tutorial.
__init__.py: Writing the trigger logic in Python
In the __init__.py file, add this code:
import json
import logging


def main(changes: str) -> None:
    try:
        logging.info("SQL Trigger fired!")
        logging.info(f"Raw change payload: {changes}")

        rows = json.loads(changes)
        logging.info(f"Number of changes: {len(rows)}")

        for row in rows:
            op = row.get("Operation")
            item = row.get("Item", {})

            sale_id = item.get("Id")
            car_id = item.get("CarId")
            salesman_id = item.get("SalesmanId")
            sale_date = item.get("SaleDate")

            # Translate operation type
            operation_map = {0: "INSERT", 1: "UPDATE", 2: "DELETE"}
            op_type = operation_map.get(op, f"Unknown ({op})")

            logging.info(f"{op_type} -> SaleID={sale_id}, CarID={car_id}, SalesmanID={salesman_id}, SaleDate={sale_date}")
    except Exception as e:
        logging.error("An error occurred while processing SQL changes.")
        logging.error(f"Exception: {str(e)}", exc_info=True)
The code starts by importing the required libraries:
import json
import logging
logging is used to write logs, and json is needed to parse the incoming string into structured data.
The Azure Functions runtime invokes the main method and passes in a string named changes.
def main(changes: str) -> None:
This string contains a JSON array representing all the changed rows detected by the SQL trigger:
logging.info("SQL Trigger fired!")
logging.info(f"Raw change payload: {changes}")
The previous lines confirm that the trigger worked and log the raw payload for visibility.
rows = json.loads(changes)
logging.info(f"Number of changes: {len(rows)}")
- The string payload is parsed into a Python list of dictionaries.
- The number of changes received in this batch is logged (Azure SQL can deliver several changed rows at once).
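For reference, here is roughly what the changes string contains for this schema (a single-insert batch; values are illustrative and mirror the log output you'll see later when testing):

[
  {
    "Operation": 0,
    "Item": {
      "Id": 111,
      "CarId": 2,
      "SalesmanId": 1,
      "SaleDate": "2024-06-10T00:00:00"
    }
  }
]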
Processing each row
for row in rows:
    op = row.get("Operation")
    item = row.get("Item", {})
- A loop iterates through each row in the batch.
- Operation is an integer that indicates the type of change: 0 = INSERT, 1 = UPDATE, 2 = DELETE.
- Item is the changed row's data. For deletes, only the primary key columns are populated, so the non-key fields come back empty.
Extract and log relevant fields
The fields match your table schema. They’re extracted for logging or downstream use:
sale_id = item.get("Id")
car_id = item.get("CarId")
salesman_id = item.get("SalesmanId")
sale_date = item.get("SaleDate")
The next snippet provides human-readable labels for the operation type:
operation_map = {0: "INSERT", 1: "UPDATE", 2: "DELETE"}
op_type = operation_map.get(op, f"Unknown ({op})")
The final log shows the record changes, with key values, so it’s easy to monitor or debug:
logging.info(f"{op_type} -> SaleID={sale_id}, CarID={car_id}, SalesmanID={salesman_id}, SaleDate={sale_date}")
Error handling
If anything goes wrong, the function logs the full exception stack trace for troubleshooting.
except Exception as e:
    logging.error("An error occurred while processing SQL changes.")
    logging.error(f"Exception: {str(e)}", exc_info=True)
Automating deployment with CircleCI
After building the function app, you can begin working on the automation. In this section, you’ll configure CircleCI to automatically deploy your Azure Function app whenever you push new code.
Configuring a CI/CD pipeline using CircleCI ensures that every time you push to the main branch, CircleCI automatically builds and deploys your serverless app to Azure. You'll define the pipeline in the .circleci/config.yml file, which contains all the jobs, steps, and logic needed for a successful deployment.
Note: You are not restricted to using the main branch; you can use any branch you like. If you use a different branch name, just make sure you change it in the .circleci/config.yml file.
Go to the .circleci directory in your project and open config.yml. Paste this configuration into the file:
version: 2.1

orbs:
  node: circleci/node@7.1.0
  azure-cli: circleci/azure-cli@1.2.0

jobs:
  deploy:
    docker:
      - image: cimg/python:3.13.5
    steps:
      - checkout
      - node/install:
          node-version: '18.17'
      - azure-cli/install
      - run:
          name: Install Python dependencies
          command: pip install -r requirements.txt
      - run:
          name: Azure CLI login
          command: |
            az login --service-principal \
              -u $AZURE_CLIENT_ID \
              -p $AZURE_CLIENT_SECRET \
              --tenant $AZURE_TENANT_ID
      - run:
          name: Register Microsoft.Storage and Microsoft.Web providers
          command: |
            echo "Registering Microsoft.Storage resource provider..."
            az provider register --namespace Microsoft.Storage
            echo "Registering Microsoft.Web resource provider..."
            az provider register --namespace Microsoft.Web
      - run:
          name: Ensure Function App + Storage Account exist
          command: |
            echo "Checking storage account..."
            if ! az storage account show --name "$AZURE_STORAGE_ACCOUNT" --resource-group "$AZURE_RESOURCE_GROUP_NAME" &>/dev/null; then
              echo "Storage account not found. Creating..."
              az storage account create \
                --name "$AZURE_STORAGE_ACCOUNT" \
                --location "$AZURE_LOCATION" \
                --resource-group "$AZURE_RESOURCE_GROUP_NAME" \
                --sku Standard_LRS
            else
              echo "Storage account already exists."
            fi

            echo "Checking function app..."
            if ! az functionapp show --name "$AZURE_FUNCTION_APP_NAME" --resource-group "$AZURE_RESOURCE_GROUP_NAME" &>/dev/null; then
              echo "Function App not found. Creating..."
              az functionapp create \
                --name "$AZURE_FUNCTION_APP_NAME" \
                --storage-account "$AZURE_STORAGE_ACCOUNT" \
                --resource-group "$AZURE_RESOURCE_GROUP_NAME" \
                --consumption-plan-location "$AZURE_LOCATION" \
                --runtime python \
                --functions-version 4 \
                --os-type Linux
            else
              echo "Function App already exists."
            fi
      - run:
          name: Enforce correct Python version
          command: |
            az functionapp config set \
              --name "$AZURE_FUNCTION_APP_NAME" \
              --resource-group "$AZURE_RESOURCE_GROUP_NAME" \
              --linux-fx-version "PYTHON|3.10"
      - run:
          name: Deploy Azure Function App
          command: |
            npm install -g azure-functions-core-tools@4 --unsafe-perm true
            func azure functionapp publish "$AZURE_FUNCTION_APP_NAME" --python --debug

workflows:
  deploy_main:
    jobs:
      - deploy:
          filters:
            branches:
              only: main
Understanding the config file
This configuration defines a single deploy job that runs in a Python 3.13 Docker image. It uses two CircleCI orbs:
- The node orb to install Node.js, which is required for Azure Functions Core Tools.
- The azure-cli orb to run Azure CLI commands inside the container.
The Azure CLI Login (az login --service-principal) logs into Azure using a service principal, which is a secure, app-based identity. The credentials are passed in as environment variables (which you will be adding soon).
- $AZURE_CLIENT_ID: The app/client ID of the service principal.
- $AZURE_CLIENT_SECRET: The secret/password for that app.
- $AZURE_TENANT_ID: The directory/tenant ID where the service principal lives.
Register resource providers
Before creating services like Function apps or Storage accounts, Azure needs the corresponding resource providers to be registered:
- az provider register --namespace Microsoft.Storage registers the Storage resource provider, which manages blob containers, file shares, and storage accounts.
- az provider register --namespace Microsoft.Web registers the Web provider, which is required for deploying Function apps, App Services, and Web Apps.
Create the Storage account
The script first checks if the storage account exists:
az storage account show --name "$AZURE_STORAGE_ACCOUNT" --resource-group "$AZURE_RESOURCE_GROUP_NAME"
If it does not exist, create it by running:
az storage account create \
--name "$AZURE_STORAGE_ACCOUNT" \
--location "$AZURE_LOCATION" \
--resource-group "$AZURE_RESOURCE_GROUP_NAME" \
--sku Standard_LRS
- --sku Standard_LRS creates a locally redundant storage (LRS) account, which is the most cost-effective and default redundancy option.
- --location: This should match the region where your Function app will live.
Note: The storage account is required by Azure Functions to store internal metadata, logs, and scaling data.
Creating the Azure Function app
The script then checks for an existing Function app:
az functionapp show --name "$AZURE_FUNCTION_APP_NAME" --resource-group "$AZURE_RESOURCE_GROUP_NAME"
If it’s missing, it’s created using:
az functionapp create \
--name "$AZURE_FUNCTION_APP_NAME" \
--storage-account "$AZURE_STORAGE_ACCOUNT" \
--resource-group "$AZURE_RESOURCE_GROUP_NAME" \
--consumption-plan-location "$AZURE_LOCATION" \
--runtime python \
--functions-version 4 \
--os-type Linux
- --runtime python specifies Python as the language runtime.
- --functions-version 4 targets version 4 of the Azure Functions runtime.
- --consumption-plan-location deploys to the serverless Consumption plan (autoscaling and pay-per-use).
- --os-type Linux uses a Linux-based hosting environment that supports Python.
Enforce the correct Python version
Even though a Python runtime is specified at creation, Azure may default to an older version. To ensure that the Function app uses Python 3.10, run:
az functionapp config set \
--name "$AZURE_FUNCTION_APP_NAME" \
--resource-group "$AZURE_RESOURCE_GROUP_NAME" \
--linux-fx-version "PYTHON|3.10"
This avoids runtime mismatch issues, especially if your code depends on Python 3.10+ features.
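To verify the setting took effect, you can query the app's runtime configuration:

az functionapp config show \
  --name "$AZURE_FUNCTION_APP_NAME" \
  --resource-group "$AZURE_RESOURCE_GROUP_NAME" \
  --query linuxFxVersion \
  --output tsv

This should print PYTHON|3.10.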
Publish the Function
The final step installs the Azure Functions Core Tools and runs the publish command:
npm install -g azure-functions-core-tools@4 --unsafe-perm true
func azure functionapp publish "$AZURE_FUNCTION_APP_NAME" --python --debug
- This pushes your local code to the live Function app in Azure.
- The --debug flag increases verbosity, which is helpful for CI logs.
- --python ensures the correct language runtime is applied during deployment.
Create the Git repository and push it to GitHub
Initialize Git and connect it to a GitHub repository to prepare for CI/CD.
Initialize a local Git repository:
git init
Stage your files:
git add .
Commit your changes:
git commit -m "Initial commit"
Create a new GitHub repository:
Go to GitHub and create a new repository. You can leave it empty (no README or .gitignore).
Copy the repository URL (https://github.com/your-username/your-repo.git).
Go back to your terminal, add the remote URL, and push your code:
git remote add origin https://github.com/your-username/your-repo.git
git branch -M main
git push -u origin main
You will need to add some environment variables. Go to your CircleCI dashboard and create a new project.
Set up environment variables in CircleCI
Before the deployment pipeline can work, you’ll need to securely add a few environment variables to CircleCI. These will provide the necessary credentials and configuration parameters used by the config.yml file.
Here’s a list of the variables you’ll need to define in your CircleCI project settings:
- AZURE_CLIENT_ID
- AZURE_CLIENT_SECRET
- AZURE_TENANT_ID
- AZURE_FUNCTION_APP_NAME
- AZURE_STORAGE_ACCOUNT
- AZURE_RESOURCE_GROUP_NAME
- AZURE_LOCATION
Setting the values
- AZURE_RESOURCE_GROUP_NAME: Use the same resource group name you created earlier when setting up the Azure SQL database.
- AZURE_LOCATION: This should match the region you used locally (e.g., eastus, westeurope).
- AZURE_STORAGE_ACCOUNT and AZURE_FUNCTION_APP_NAME: Name these whatever you like, as long as each is globally unique. The storage account name must be between 3 and 24 characters and composed of lowercase letters and numbers only; the function app name is more permissive and may include hyphens.
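As a convenience, you can generate candidate names the same way you generated the SQL server name earlier, then copy the results into CircleCI (a sketch; the prefixes are arbitrary):

AZURE_STORAGE_ACCOUNT="caryardstore$(openssl rand -hex 3)"
AZURE_FUNCTION_APP_NAME="caryard-func-$(openssl rand -hex 3)"
echo $AZURE_STORAGE_ACCOUNT $AZURE_FUNCTION_APP_NAME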
Obtaining AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID
These values come from a special Azure Service Principal, a non-human identity used to authenticate and perform actions in your Azure subscription from CI/CD tools.
Create a service principal
In your terminal, run:
az ad sp create-for-rbac --name azurefndeployer --sdk-auth
This command creates a new service principal named azurefndeployer. Here's the shape of the JSON object it returns:
{
  "clientId": "xxxxxx",
  "clientSecret": "xxxxxx",
  "subscriptionId": "xxxxxx",
  "tenantId": "xxxxxx",
  ...
}
Copy the value of:
- "clientId" and assign it to AZURE_CLIENT_ID.
- "clientSecret" and assign it to AZURE_CLIENT_SECRET.
- "tenantId" and assign it to AZURE_TENANT_ID.
Note: You may also want to save "clientId" and "subscriptionId" for the next step.
Grant permissions to the service principal
Next, assign the Owner role to this service principal. This role gives it sufficient rights to create and manage Azure resources like Function apps and Storage Accounts.
Replacing placeholders with values from the earlier output, run this command:
az role assignment create \
--assignee "<CLIENT-ID>" \
--role "Owner" \
--scope "/subscriptions/<SUBSCRIPTION-ID>"
- Replace <CLIENT-ID> with the value of "clientId".
- Replace <SUBSCRIPTION-ID> with the value of "subscriptionId".
This grants the service principal full control over your Azure subscription and ensures that your CircleCI deployment process won’t have permission issues.
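If granting subscription-wide Owner feels too broad for your environment, a Contributor assignment scoped to the resource group is typically enough for the resources this pipeline manages. Note that the pipeline's provider-registration step still needs subscription-level rights, so with the narrower scope you may need to run the az provider register commands once yourself. A sketch, reusing the IDs from the previous output:

az role assignment create \
  --assignee "<CLIENT-ID>" \
  --role "Contributor" \
  --scope "/subscriptions/<SUBSCRIPTION-ID>/resourceGroups/carYardResourceGrp"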
Trigger the pipeline manually. If everything goes well, there will be a green badge labeled Success, along with a deploy job. Click the deploy job and expand the steps to display the details.
For the last step, you should see something like the text below in the output:
Functions in ***********:
track_changes - [sqlTrigger]
Add the SQL connection string
Before the function app can react to changes in the Azure SQL database, it needs to know how to connect to it. You'll do that by securely adding the connection string you generated earlier to the app's configuration in the Azure portal, so retrieve that string and keep it nearby.
Here’s how to do it:
1. Go to the Azure Portal and open your resource groups. You will get a list of your Azure resource groups.
2. Find and click the resource group you created earlier, the same one you used when provisioning your SQL Server and Function app.
3. Inside the resource group, find and click your Function app. Once the app loads, your deployed function (track_changes) should appear toward the bottom of the overview page.
4. On the left navigation pane, expand the Settings section, then click Environment variables.
5. Switch from the default App settings tab to the Connection strings tab.
6. Click the + Add button and enter this information on the form:
   - Name: Enter SQL_CONNECTION_STRING.
   - Value: Paste the full connection string you generated earlier. Remember to replace <username> and <password> in the generated string before setting the value.
   - Type: Select SQLServer.
7. Check the box for Deployment slot setting. This ensures your connection string persists across deployments and slots.
8. Click Apply. Azure will restart the Function app to apply the new configuration. Once it has restarted, your Function app will have access to the database, and you can test it live.
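If you prefer scripting over clicking through the portal, the same configuration can usually be applied with the CLI. Function apps share the App Service configuration surface, so az webapp config connection-string set should work here (a sketch; replace the placeholders with your real values):

az webapp config connection-string set \
  --name "<your-function-app-name>" \
  --resource-group "<your-resource-group>" \
  --connection-string-type SQLServer \
  --settings SQL_CONNECTION_STRING='<your-full-connection-string>'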
Test the live function
To confirm everything is working as expected, you'll trigger some changes in the Sales table and observe how the function processes them: insert and update a few records, then watch the function's output in the Azure Portal logs.
Open Function logs
To observe the function in real time, click your Function. Go to the Logs tab.
Note: Logs can take a few seconds to appear. If they're delayed, you can click Log Stream back on the overview page. This opens a real-time stream that captures all logs from your Function.
Insert new sales records
You can simulate new car sales in the Query Editor (Preview) of your Azure SQL Database resource. Run:
-- Insert new sales records
INSERT INTO Sales (CarId, SalesmanId, SaleDate)
VALUES
(2, 1, '2024-06-10'),
(1, 2, '2024-06-12'),
(3, 1, '2024-06-15');
If everything is correct, the Azure Function will be triggered. Your logs should be similar to:
[Information] SQL Trigger fired!
[Information] Raw change payload: [{"Operation":0,"Item":{"Id":111,"CarId":2,"SalesmanId":1,"SaleDate":"2024-06-10T00:00:00"}}, ... ]
[Information] Number of changes: 3
[Information] INSERT -> SaleID=111, CarID=2, SalesmanID=1, SaleDate=2024-06-10T00:00:00
...
This confirms that the function correctly detected multiple inserts and logged the relevant details.
Test an update
Now update one of the records you just inserted. Id values are generated by the database, so use one that exists in your own Sales table (this example uses 96):
-- Update an existing sale
UPDATE Sales
SET SalesmanId = 2
WHERE Id = 96;
Again, check the logs:
[Information] SQL Trigger fired!
[Information] Raw change payload: [{"Operation":1,"Item":{"Id":96,"CarId":2,"SalesmanId":2,"SaleDate":"2024-06-10T00:00:00"}}]
[Information] Number of changes: 1
[Information] UPDATE -> SaleID=96, CarID=2, SalesmanID=2, SaleDate=2024-06-10T00:00:00
The trigger should correctly interpret this as an update and log it accordingly.
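As one last check, you can exercise the DELETE path as well, since the function already maps operation code 2 (again, use an Id that exists in your table):

-- Delete a sale
DELETE FROM Sales
WHERE Id = 96;

The logs should show a DELETE entry. Note that for delete operations the trigger payload typically contains only the primary key columns of Item, so the non-key fields will log as None. Congratulations, you're done!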
Find the full code for this project in this GitHub repository.
Conclusion
You just wired up an end-to-end solution: an auto yard app that reacts to SQL changes in real time, logs the results, and deploys automatically using CircleCI.
If you’re thinking about taking this into production, here are a few things to keep in mind:
- Use Azure Table Storage or Blob to store your last sync version reliably.
- Add retry logic for failures, especially around downstream APIs.
- Offload heavy lifting to queues to avoid blocking your trigger function.
- Keep an eye on function timeouts and set autoscale rules if needed.
Ready to take it further? Try adding more tables, handling deletions differently, or hooking into other services, and keep CircleCI in your flow for rapid iteration.