Automating deployment of Azure Functions
Serverless platforms free developers from provisioning servers, configuring runtimes, and worrying about scaling or availability, handling all that behind the scenes. This enables cloud development teams to shift focus from infrastructure maintenance to delivering direct business value.
Azure Functions (Microsoft’s serverless offering) allows you to execute code on demand, scale automatically, and pay only for what you use. Combining built-in support for multiple programming languages, integration with other Azure services, and rich monitoring through Application Insights, it offers a powerful platform for building resilient cloud-native applications.
This tutorial will guide you through the deployment of a simple, real-world example: an auto yard management application that uses Azure Functions for business logic and Azure SQL as the data store. The deployment will be automated through CircleCI for CI/CD.
Prerequisites
To successfully deploy an Azure Function through CircleCI, you will need:
- An Azure account to host your function.
- The Azure CLI to interact with Azure resources from your terminal.
- A CircleCI account for CI/CD automation.
- Some Python knowledge.
- Git installed and a GitHub account. These will be used for version control and repository management.
Notes:
- All Azure-related commands in this tutorial are shown using the locally installed Azure CLI. If you’re working from a machine where you can’t install tools, you can use the Azure Cloud Shell, which provides the same CLI experience through your browser.
- You’ll be using Python in the examples, but you don’t have to be a Python pro. If you prefer another language, you can follow along and translate the steps to your preferred tool.
Review the project architecture
To help you navigate the codebase, review this breakdown of the folder structure, which describes the role each part plays in the bigger picture. Here is the directory tree:
.
├── .circleci
│ └── config.yml
├── get_cars
│ ├── function.json
│ └── __init__.py
├── get_salesmen
│ ├── function.json
│ └── __init__.py
├── .gitignore
├── host.json
├── requirements.txt
└── shared
└── db.py
The project is structured for clarity, modularity, and ease of deployment using Azure Functions and CircleCI. At the top level:
- The `.circleci/` directory contains the `config.yml` file, which defines the CI/CD pipeline that automates the build and deployment process to Azure.
- Each function in the application is housed in its own folder (`get_cars/` and `get_salesmen/`).
- Inside these folders, `function.json` specifies the trigger and binding configuration (like HTTP routes), while `__init__.py` contains the actual function logic written in Python.
To avoid repeating code, the project includes a shared/ directory, where db.py holds reusable database utility functions for connecting to the Azure SQL backend.
Supporting files include:
- `host.json`, which provides global configuration settings for the function app.
- `.gitignore`, which prevents unwanted files from being committed to version control.
- `requirements.txt`, which lists the Python dependencies needed for the app to run.
Note: The tutorial project supports only data retrieval. For now, leave the full CRUD implementation as a future challenge.
Creating the Azure SQL database
Before we can deploy our Azure Functions, we need to set up the Azure SQL Database that will store our car yard data. In this section, we’ll walk through creating the database, defining the schema, and seeding initial data. For simplicity, we’ll handle this setup manually. CircleCI will only handle deployment, not database provisioning.
To get started, make sure you’ve logged into your Azure account and selected the correct subscription.
az login
Before running the SQL setup commands, it’s helpful to define a few variables in your terminal for convenience and consistency. These variables will be used throughout the CLI commands.
If you need to, get a quick primer on Azure resource naming conventions.
In your terminal, run:
RG_NAME="carYardResourceGrp"
LOCATION="eastus"
SQL_SERVER_NAME="my-sql-server-$(openssl rand -hex 3)"
ADMIN_USER="caryardadmin"
ADMIN_PASS='yourpassword'
DB_NAME="myCarYardDB"
FIREWALL_RULE_NAME="AllowAll"
START_IP="0.0.0.0"
END_IP="255.255.255.255"
Here is what each variable represents:
- `RG_NAME`: Name of the Azure Resource Group to contain all your resources.
- `LOCATION`: Azure region where resources will be deployed (e.g., `eastus`).
- `SQL_SERVER_NAME`: Unique name for the Azure SQL Server. A random suffix is added to avoid naming conflicts.
- `ADMIN_USER`: Username for the SQL Server admin.
- `ADMIN_PASS`: Password for the SQL Server admin (use a strong one).
- `DB_NAME`: Name of the actual SQL Database that will store your data.
- `FIREWALL_RULE_NAME`: Label for the firewall rule you’re about to create. In this case, `AllowAll` is used when opening access to all IP addresses (from `0.0.0.0` to `255.255.255.255`), which can be helpful for quick testing or demos.
Note: Allowing all IPs is not recommended for production environments because it exposes your SQL Server to the internet. Always restrict access to trusted IP ranges only whenever possible.
- `START_IP` and `END_IP`: IP range allowed to access the SQL Server. `0.0.0.0` to `255.255.255.255` allows all IPs.
Create an Azure resource group
az group create --name $RG_NAME --location $LOCATION
This command creates a resource group in the specified Azure region (--location $LOCATION). A resource group is a logical container for all related resources like your SQL Server and database.
Register the SQL resource provider (if you haven’t already):
az provider register --namespace Microsoft.Sql
This registers the Microsoft.Sql resource provider with your Azure subscription, enabling SQL-related services like Azure SQL Server and Database.
Verify SQL provider registration:
az provider list --query "[?namespace=='Microsoft.Sql']" --output table
The command lists the registration status of the Microsoft.Sql provider in a table format so you can confirm that it is registered before you proceed.
Create the Azure SQL server:
az sql server create \
--name $SQL_SERVER_NAME \
--resource-group $RG_NAME \
--location $LOCATION \
--admin-user $ADMIN_USER \
--admin-password $ADMIN_PASS
Create the SQL database:
az sql db create \
--resource-group $RG_NAME \
--server $SQL_SERVER_NAME \
--name $DB_NAME \
--service-objective Basic
This creates an SQL database within the SQL server you just created. The Basic service tier works well for testing and small workloads.
Create a firewall rule to allow access to the server:
az sql server firewall-rule create \
--resource-group $RG_NAME \
--server $SQL_SERVER_NAME \
--name $FIREWALL_RULE_NAME \
--start-ip-address $START_IP \
--end-ip-address $END_IP
Get the SQL connection string:
az sql db show-connection-string \
--client odbc \
--server $SQL_SERVER_NAME \
--name $DB_NAME
This command outputs a template connection string for connecting to your Azure SQL Database using ODBC.
You will receive output similar to:
Driver={ODBC Driver 13 for SQL Server};Server=tcp:my-sql-server-XXXX.database.windows.net,1433;Database=myDatabase;Uid=<username>@my-sql-server-XXXX;Pwd=<password>;Encrypt=yes;TrustServerCertificate=no;
Note: Replace <username> and <password> with your actual admin credentials. Also change the driver version from 13 to 17; the string needs ODBC Driver 17 to work properly after deployment.
Store this connection string securely.
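If you’d rather script the substitution than edit the string by hand, here is a quick, optional Python sketch. The server suffix is the placeholder from the template output, and the credentials are the sample values used in this tutorial; substitute your own:

```python
# Turn the template from `az sql db show-connection-string` into a usable
# connection string. Values below are the tutorial's sample placeholders.
template = (
    "Driver={ODBC Driver 13 for SQL Server};"
    "Server=tcp:my-sql-server-XXXX.database.windows.net,1433;"
    "Database=myDatabase;"
    "Uid=<username>@my-sql-server-XXXX;"
    "Pwd=<password>;"
    "Encrypt=yes;TrustServerCertificate=no;"
)

conn_str = (
    template
    .replace("ODBC Driver 13", "ODBC Driver 17")  # driver available after deployment
    .replace("<username>", "caryardadmin")
    .replace("<password>", "yourpassword")
)
print(conn_str)
```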
In the Azure portal, open the query editor for your database. Create the tables and seed some data by running this query:
-- Create tables
CREATE TABLE Salesmen (
Id INT PRIMARY KEY IDENTITY(1,1),
Name NVARCHAR(100),
Email NVARCHAR(100)
);
CREATE TABLE Cars (
Id INT PRIMARY KEY IDENTITY(1,1),
Make NVARCHAR(50),
Model NVARCHAR(50),
Year INT,
Price DECIMAL(18,2)
);
CREATE TABLE Sales (
Id INT PRIMARY KEY IDENTITY(1,1),
CarId INT,
SalesmanId INT,
SaleDate DATE,
FOREIGN KEY (CarId) REFERENCES Cars(Id),
FOREIGN KEY (SalesmanId) REFERENCES Salesmen(Id)
);
-- Seed data
INSERT INTO Salesmen (Name, Email)
VALUES ('Alice Johnson', 'alice@caryard.com'),
('Bob Smith', 'bob@caryard.com');
INSERT INTO Cars (Make, Model, Year, Price)
VALUES ('Toyota', 'Camry', 2020, 25000),
('Ford', 'Mustang', 2018, 30000),
('Tesla', 'Model 3', 2022, 45000);
INSERT INTO Sales (CarId, SalesmanId, SaleDate)
VALUES (1, 1, '2024-06-01'),
(3, 2, '2024-06-05');
With the database ready, you can move on to building the Azure Function app that will interact with it.
Building the Azure Function app
Now you can start working on the Azure Function App. This is where you will define the logic for handling requests, querying the database, and returning structured responses.
Setting up the project directory
Create and enter the main project directory:
mkdir CarYardApp
cd CarYardApp
Create the project folder structure.
For Linux/macOS (Unix-based systems):
mkdir -p .circleci get_cars get_salesmen shared
touch .circleci/config.yml
touch get_cars/function.json get_cars/__init__.py
touch get_salesmen/function.json get_salesmen/__init__.py
touch shared/db.py
touch .gitignore
touch host.json
touch requirements.txt
For Windows (command prompt):
mkdir .circleci && cd .circleci && type nul > config.yml && cd ..
mkdir get_cars && cd get_cars && type nul > function.json && type nul > __init__.py && cd ..
mkdir get_salesmen && cd get_salesmen && type nul > function.json && type nul > __init__.py && cd ..
mkdir shared && cd shared && type nul > db.py && cd ..
type nul > .gitignore
type nul > host.json
type nul > requirements.txt
Note: If you’re using PowerShell, replace type nul > filename with New-Item filename -ItemType File.
Add dependencies
Next, define the Python dependencies you’ll need for your Azure Function app. These packages will be installed when the function is deployed and are essential for its functionality.
Open the requirements.txt file you created earlier, or run this in your terminal:
nano requirements.txt
Paste this content into the text file:
azure-functions
pyodbc
- `azure-functions`: This is the core SDK for Azure Functions in Python. It provides decorators and helpers to build functions that respond to events like HTTP requests, queues, blobs, etc.
- `pyodbc`: A Python library for accessing databases using ODBC drivers. In this case, it will let us connect to the Azure SQL Database from our Python functions using the connection string you generated earlier.
Configure host.json
The host.json file contains global configuration settings for your Azure Function app. It controls logging behavior, extension bundles, and runtime options.
If you’re not already in the project root, go to it. Open the host.json file and paste this content:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}
- `"version": "2.0"`: Specifies the Azure Functions runtime version. Version 2.0+ supports modern features and languages like Python.
- `"logging"`: Configures Application Insights logging behavior. Here, request telemetry is excluded from sampling to reduce noise while keeping useful logs.
- `"extensionBundle"`: Enables Azure to automatically install necessary function extensions (e.g., for HTTP triggers, bindings). The version range `[4.*, 5.0.0)` ensures compatibility with current stable releases.
You rarely need to change this file unless you’re enabling specific bindings or tweaking telemetry.
Note: Since you won’t be running the Azure Function app locally in this tutorial, you don’t need to create a local.settings.json file. All configuration, including environment variables like the SQL connection string, will be handled during deployment, either through CircleCI or directly within Azure.
Set up .gitignore
The .gitignore file ensures that unnecessary, sensitive, or environment-specific files are excluded from version control. In the root of your CarYardApp directory, open or edit the .gitignore file. Add this content:
# Byte-compiled / cache files
__pycache__/
*.py[cod]
*.pyo
*.pyd
*.pdb
*.egg-info/
*.eggs/
*.manifest
*.spec
# Azure Functions Core Tools artifacts
.azure/
bin/
obj/
.vscode/
# Python virtual environment
.env/
.venv/
env/
venv/
# macOS
.DS_Store
# VS Code settings
.vscode/
# Test output
.coverage
htmlcov/
.tox/
nosetests.xml
coverage.xml
*.cover
*.log
# CircleCI
.circleci/cache/
- Python cache files (`__pycache__/`, `*.pyc`): These are generated at runtime and don’t need to be tracked.
- Azure/CI artifacts (`.azure/`, `.circleci/cache/`): Keeps deployment tooling and cache folders out of version control.
- Virtual environments (`.venv/`, `env/`): These are local setups and should never be committed.
- Editor settings (`.vscode/`) and system files (`.DS_Store`): Excluded to keep the project editor-agnostic and platform-neutral.
Create shared/db.py
This module provides a reusable database utility. It establishes a connection to the Azure SQL Database using the pyodbc library. It’s imported by individual functions (get_cars, get_salesmen) to avoid duplicating connection logic.
Open shared/db.py and add:
import pyodbc
import os
import logging
import traceback
def get_connection():
    try:
        conn_str = os.getenv("SQLCONNSTR_SQL_CONNECTION_STRING")
        return pyodbc.connect(conn_str)
    except Exception as e:
        logging.error("Failed to establish DB connection.")
        logging.error(f"Error: {e}")
        logging.error("Stack trace:\n" + traceback.format_exc())
        raise  # Re-raise the exception after logging it
You might think that the environment variable name (SQLCONNSTR_SQL_CONNECTION_STRING) seems unusual. That’s intentional. Azure App Service automatically prefixes SQL Server connection strings with SQLCONNSTR_ when stored as application settings. This is a special convention used by Azure to identify and handle SQL Server connection strings securely.
So if your connection string key in Azure is named SQL_CONNECTION_STRING, Azure exposes it to your function as:
SQLCONNSTR_SQL_CONNECTION_STRING
You can learn more about Azure environment variable prefixes here: Azure App Settings Variable Prefixes (Microsoft Docs).
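Here is a small, self-contained illustration of that convention. The connection string value is a truncated placeholder set only for demonstration; in a real Function App, Azure sets the variable for you:

```python
import os

# Simulate how Azure App Service exposes a connection string of type
# "SQLServer" named SQL_CONNECTION_STRING to your code (placeholder value).
os.environ["SQLCONNSTR_SQL_CONNECTION_STRING"] = "Driver={ODBC Driver 17 for SQL Server};..."

def get_sql_connection_string(setting_name="SQL_CONNECTION_STRING"):
    # Azure prepends a type-specific prefix: SQLCONNSTR_ for SQLServer,
    # MYSQLCONNSTR_ for MySQL, CUSTOMCONNSTR_ for Custom, and so on.
    return os.getenv(f"SQLCONNSTR_{setting_name}")

print(get_sql_connection_string())
```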
Create the get_cars Azure function
The get_cars function is an HTTP-triggered Azure function that connects to the Azure SQL database and returns a list of cars in JSON format.
This function demonstrates how to:
- Use the shared `get_connection()` utility from `shared/db.py`
- Query the database
- Format and return the result as JSON
Open the get_cars/__init__.py file and paste this code:
import azure.functions as func
import logging
from shared.db import get_connection
def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info("START: get_cars function invoked")

    try:
        conn = get_connection()
        logging.info("Connected to DB")

        cursor = conn.cursor()
        cursor.execute("SELECT Make, Model, Year, Price FROM Cars")
        rows = cursor.fetchall()

        result = [{"make": r[0], "model": r[1], "year": r[2], "price": float(r[3])} for r in rows]
        return func.HttpResponse(str(result), mimetype="application/json")

    except Exception as e:
        logging.error(f"ERROR in get_cars: {str(e)}")
        return func.HttpResponse(f"ERROR in get_cars: {str(e)}", status_code=500)
This file contains the core logic of the function: handling incoming HTTP requests, connecting to the database, querying car records, and formatting the response.
import azure.functions as func
import logging
from shared.db import get_connection
- `azure.functions`: The Azure Functions SDK to define HTTP triggers and responses.
- `logging`: Logs useful info and errors for debugging in Azure.
- `get_connection`: Reuses the DB connection helper from `shared/db.py`.
def main(req: func.HttpRequest) -> func.HttpResponse:
logging.info("START: get_cars function invoked")
This is the entry point for the function. Azure automatically calls this method and first logs that the function has been triggered.
conn = get_connection()
logging.info("Connected to DB")
cursor = conn.cursor()
cursor.execute("SELECT Make, Model, Year, Price FROM Cars")
It then establishes a connection to the Azure SQL database and executes a SQL query to retrieve car details.
rows = cursor.fetchall()
result = [{"make": r[0], "model": r[1], "year": r[2], "price": float(r[3])} for r in rows]
- Processes the results into a list of dictionaries, one per car.
- Converts `price` from decimal to float for JSON compatibility.
return func.HttpResponse(str(result), mimetype="application/json")
- Returns the list as a JSON HTTP response.
except Exception as e:
logging.error(f"ERROR in get_cars: {str(e)}")
return func.HttpResponse(f"ERROR in get_cars: {str(e)}", status_code=500)
- This block catches and logs any runtime errors, then returns an HTTP 500 error.
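One caveat worth knowing: `str(result)` returns Python’s repr of the list (single quotes), which strict JSON parsers may reject. A more robust variation, shown here on a sample row rather than the live query result, is to serialize with `json.dumps`:

```python
import json

# `str(result)` would produce Python repr with single quotes;
# json.dumps emits standard, double-quoted JSON instead.
result = [{"make": "Toyota", "model": "Camry", "year": 2020, "price": 25000.0}]

body = json.dumps(result)
print(body)  # [{"make": "Toyota", "model": "Camry", "year": 2020, "price": 25000.0}]
```

In the function, that would mean returning `func.HttpResponse(json.dumps(result), mimetype="application/json")` instead.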
The get_cars/function.json file tells Azure how the function is triggered and how data flows in and out. Open the file and add this configuration:
{
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "get_cars"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
- `"authLevel": "anonymous"`: Allows public access without authentication.
- `"methods": ["get"]`: Accepts only `GET` requests.
- `"route": "get_cars"`: Endpoint will be accessible via `/api/get_cars`.
- `"type": "http"`: Indicates the function returns an HTTP response.
- `"name": "$return"`: Maps to the value returned by the Python function.
Note: The authLevel: "anonymous" setting allows the function to be called without any authentication. That is fine for local development or demos, but in production, you should use function or admin level authentication to protect your endpoints from unauthorized access. Learn more about Azure Function authentication levels.
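To make the route-to-URL mapping concrete, here is a tiny illustrative helper (the app name `my-caryard-app` is hypothetical):

```python
def invoke_url(app_name, route):
    # HTTP-triggered functions are served under the /api prefix by default
    # (the prefix is configurable via routePrefix in host.json).
    return f"https://{app_name}.azurewebsites.net/api/{route}"

print(invoke_url("my-caryard-app", "get_cars"))
```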
The get_salesmen function follows the same structure and logic as get_cars, with the key difference being that it retrieves data from the Salesmen table in the database.
It is also triggered by an HTTP GET request and returns a list of salesmen with their names and emails.
get_salesmen/__init__.py:
import azure.functions as func
import logging
from shared.db import get_connection
def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info("START: get_salesmen function invoked")

    try:
        conn = get_connection()
        logging.info("Connected to DB")

        cursor = conn.cursor()
        cursor.execute("SELECT Name, Email FROM Salesmen")
        rows = cursor.fetchall()

        result = [{"name": r[0], "email": r[1]} for r in rows]
        return func.HttpResponse(str(result), mimetype="application/json")

    except Exception as e:
        logging.error(f"ERROR in get_salesmen: {str(e)}")
        return func.HttpResponse(f"ERROR in get_salesmen: {str(e)}", status_code=500)
get_salesmen/function.json:
{
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "get_salesmen"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
Automate deployment with CircleCI
After building the function app, you can start automating. In this section, you’ll configure CircleCI to automatically deploy your Azure Function app whenever you push new code.
You will configure a CI/CD pipeline using CircleCI. This setup ensures that every time you push to the master branch, CircleCI automatically builds and deploys the serverless app to Azure. You’ll define the pipeline in the .circleci/config.yml file. That file contains all the jobs, steps, and logic needed for a successful deployment.
Note: You can use any branch you prefer. Although we’re using the main branch in this example, you can use a different branch name. Just make sure to use the real branch name in the .circleci/config.yml file.
Go to the .circleci directory in your project, and open config.yml. Paste this configuration:
version: 2.1

orbs:
  node: circleci/node@7.1.0
  azure-cli: circleci/azure-cli@1.2.0

jobs:
  deploy:
    docker:
      - image: cimg/python:3.13.5
    steps:
      - checkout
      - node/install:
          node-version: '18.17'
      - azure-cli/install
      - run:
          name: Install Python dependencies
          command: pip install -r requirements.txt
      - run:
          name: Azure CLI login
          command: |
            az login --service-principal \
              -u $AZURE_CLIENT_ID \
              -p $AZURE_CLIENT_SECRET \
              --tenant $AZURE_TENANT_ID
      - run:
          name: Register Microsoft.Storage and Microsoft.Web providers
          command: |
            echo "Registering Microsoft.Storage resource provider..."
            az provider register --namespace Microsoft.Storage
            echo "Registering Microsoft.Web resource provider..."
            az provider register --namespace Microsoft.Web
      - run:
          name: Ensure Function App + Storage Account exist
          command: |
            echo "Checking storage account..."
            if ! az storage account show --name "$AZURE_STORAGE_ACCOUNT" --resource-group "$AZURE_RESOURCE_GROUP_NAME" &>/dev/null; then
              echo "Storage account not found. Creating..."
              az storage account create \
                --name "$AZURE_STORAGE_ACCOUNT" \
                --location "$AZURE_LOCATION" \
                --resource-group "$AZURE_RESOURCE_GROUP_NAME" \
                --sku Standard_LRS
            else
              echo "Storage account already exists."
            fi

            echo "Checking function app..."
            if ! az functionapp show --name "$AZURE_FUNCTION_APP_NAME" --resource-group "$AZURE_RESOURCE_GROUP_NAME" &>/dev/null; then
              echo "Function App not found. Creating..."
              az functionapp create \
                --name "$AZURE_FUNCTION_APP_NAME" \
                --storage-account "$AZURE_STORAGE_ACCOUNT" \
                --resource-group "$AZURE_RESOURCE_GROUP_NAME" \
                --consumption-plan-location "$AZURE_LOCATION" \
                --runtime python \
                --functions-version 4 \
                --os-type Linux
            else
              echo "Function App already exists."
            fi
      - run:
          name: Enforce correct Python version
          command: |
            az functionapp config set \
              --name "$AZURE_FUNCTION_APP_NAME" \
              --resource-group "$AZURE_RESOURCE_GROUP_NAME" \
              --linux-fx-version "PYTHON|3.10"
      - run:
          name: Deploy Azure Function App
          command: |
            npm install -g azure-functions-core-tools@4 --unsafe-perm true
            func azure functionapp publish "$AZURE_FUNCTION_APP_NAME" --python --debug

workflows:
  deploy_main:
    jobs:
      - deploy:
          filters:
            branches:
              only: master
This configuration defines a single deploy job that runs in a Python 3.13 Docker image. It uses two CircleCI orbs:
- The `node` orb to install Node.js, required for Azure Functions Core Tools.
- The `azure-cli` orb to run Azure CLI commands inside the container.
Azure CLI log in (az login --service-principal)
This command logs into Azure using a service principal, which is a secure, app-based identity. The credentials are passed in as environment variables (which we will add shortly):
- `$AZURE_CLIENT_ID`: The app/client ID of the service principal.
- `$AZURE_CLIENT_SECRET`: The secret/password for that app.
- `$AZURE_TENANT_ID`: The directory/tenant ID where the service principal lives.
Register resource providers
Before creating services like Function Apps or Storage Accounts, Azure needs the corresponding resource providers to be registered:
- `az provider register --namespace Microsoft.Storage` registers the `Storage` resource provider, which manages blob containers, file shares, and storage accounts.
- `az provider register --namespace Microsoft.Web` registers the `Web` provider, which is required for deploying Function Apps, App Services, and Web Apps.
Creating the storage account
The script first checks whether the storage account exists:
az storage account show --name "$AZURE_STORAGE_ACCOUNT" --resource-group "$AZURE_RESOURCE_GROUP_NAME"
If it does not exist, this command creates it:
az storage account create \
--name "$AZURE_STORAGE_ACCOUNT" \
--location "$AZURE_LOCATION" \
--resource-group "$AZURE_RESOURCE_GROUP_NAME" \
--sku Standard_LRS
- `--sku Standard_LRS` creates a locally redundant storage (LRS) account, which is the most cost-effective and default redundancy option.
- `--location`: This should match the region where your Function App will live.
Note: The storage account is required by Azure Functions to store internal metadata, logs, and scaling data.
Create the Azure Function app
Next, the script checks for an existing Function App:
az functionapp show --name "$AZURE_FUNCTION_APP_NAME" --resource-group "$AZURE_RESOURCE_GROUP_NAME"
If it’s missing, it’s created using:
az functionapp create \
--name "$AZURE_FUNCTION_APP_NAME" \
--storage-account "$AZURE_STORAGE_ACCOUNT" \
--resource-group "$AZURE_RESOURCE_GROUP_NAME" \
--consumption-plan-location "$AZURE_LOCATION" \
--runtime python \
--functions-version 4 \
--os-type Linux
- `--runtime python`: Specifies Python as the language runtime.
- `--functions-version 4`: Targets version 4 of the Azure Functions runtime.
- `--consumption-plan-location`: Deploys to the serverless consumption plan (autoscaling and pay-per-use).
- `--os-type Linux`: Uses a Linux-based hosting environment, which supports Python.
Enforcing the correct Python version
Even though a Python runtime is specified at creation, Azure may default to an older version. To ensure the Function App uses Python 3.10, run:
az functionapp config set \
--name "$AZURE_FUNCTION_APP_NAME" \
--resource-group "$AZURE_RESOURCE_GROUP_NAME" \
--linux-fx-version "PYTHON|3.10"
This avoids runtime mismatch issues, especially if your code depends on Python 3.10+ features.
Publish the function
The final step installs the Azure Functions Core Tools and runs the publish command:
npm install -g azure-functions-core-tools@4 --unsafe-perm true
func azure functionapp publish "$AZURE_FUNCTION_APP_NAME" --python --debug
- This pushes your local code to the live Function App in Azure.
- The `--debug` flag increases verbosity, which is helpful for CI logs.
- `--python` ensures the correct language runtime is applied during deployment.
Create the Git repository and push it to GitHub
Initialize Git in your project and connect it to a GitHub repository.
Initialize a local Git repository:
git init
This starts version control in your current project directory.
Stage your files:
git add .
Commit your changes:
git commit -m "Initial commit"
Go to GitHub and create a new repository. You can leave it empty (no README or .gitignore).
Copy the repository URL (https://github.com/your-username/your-repo.git).
Go back to your terminal, add the remote URL, and push your code:
git remote add origin https://github.com/your-username/your-repo.git
git branch -M master
git push -u origin master
Configure CircleCI
Head over to your CircleCI dashboard and create a new project. You will need to add some environment variables.
Set up environment variables in CircleCI
Before the deployment pipeline can work, you’ll need to securely add a few environment variables to CircleCI. These will provide the credentials and configuration parameters used by the config.yml file.
Here’s a list of the variables you’ll need to define in your CircleCI project settings:
- `AZURE_CLIENT_ID`
- `AZURE_CLIENT_SECRET`
- `AZURE_TENANT_ID`
- `AZURE_FUNCTION_APP_NAME`
- `AZURE_STORAGE_ACCOUNT`
- `AZURE_RESOURCE_GROUP_NAME`
- `AZURE_LOCATION`
Setting the values
- `AZURE_RESOURCE_GROUP_NAME`: Use the same resource group name that you created earlier when setting up the Azure SQL Database.
- `AZURE_LOCATION`: This should match the region you used earlier (e.g., `eastus`, `westeurope`).
- `AZURE_STORAGE_ACCOUNT` and `AZURE_FUNCTION_APP_NAME`: You can choose any names you like. Make sure each is between 3 and 24 characters, composed of numbers and lower-case letters only, and globally unique.
Get AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and AZURE_TENANT_ID
These values come from a special Azure Service Principal, a non-human identity used to authenticate and perform actions in your Azure subscription from CI/CD tools.
Create a Service Principal by running:
az ad sp create-for-rbac --name azurefndeployer --sdk-auth
This command creates a new service principal named azurefndeployer with the necessary permissions. It returns a JSON object similar to:
{
"clientId": "xxxxxx",
"clientSecret": "xxxxxx",
"subscriptionId": "xxxxxx",
"tenantId": "xxxxxx",
...
}
Assign the value of:
- `"clientSecret"` to `AZURE_CLIENT_SECRET`.
- `"tenantId"` to `AZURE_TENANT_ID`.
- `"clientId"` to `AZURE_CLIENT_ID`.
Note: You may also want to make a note of the "clientId" and "subscriptionId" values for the next step.
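To make the mapping explicit, here is an optional sketch; all IDs and secrets below are redacted placeholders, not real credentials:

```python
import json

# Hypothetical (redacted) output from `az ad sp create-for-rbac`.
sp_json = """{
  "clientId": "00000000-0000-0000-0000-000000000001",
  "clientSecret": "s3cr3t",
  "subscriptionId": "00000000-0000-0000-0000-000000000002",
  "tenantId": "00000000-0000-0000-0000-000000000003"
}"""

sp = json.loads(sp_json)
env_vars = {
    "AZURE_CLIENT_ID": sp["clientId"],
    "AZURE_CLIENT_SECRET": sp["clientSecret"],
    "AZURE_TENANT_ID": sp["tenantId"],
}
for name, value in env_vars.items():
    print(f"{name}={value}")
```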
Grant permissions to the service principal
Next, assign the Owner role to this service principal so it has sufficient rights to create and manage Azure resources like Function Apps and Storage Accounts.
Run:
az role assignment create \
--assignee "<CLIENT-ID>" \
--role "Owner" \
--scope "/subscriptions/<SUBSCRIPTION-ID>"
- Replace `<CLIENT-ID>` with the value of `"clientId"`.
- Replace `<SUBSCRIPTION-ID>` with the value of `"subscriptionId"`.
This grants the service principal full control over your Azure subscription and ensures that your CircleCI deployment process won’t hit permission issues. For tighter security, you can instead scope the assignment to a single resource group once you’ve confirmed the pipeline works.
Now, trigger the pipeline manually. If everything goes well, there will be a green badge labeled Success and a deploy job. Click the deploy job to expand its steps for details.
For the last step, there should be something like this text in the output:
Functions in :
get_cars - [httpTrigger]
Invoke url: https://xxxx.azurewebsites.net/api/get_cars
get_salesmen - [httpTrigger]
Invoke url: https://xxxx.azurewebsites.net/api/get_salesmen
Configuring the database connection
Before your function app can call the database and retrieve data, you need to securely add the SQL connection string to the app settings in the Azure portal. This tells the app exactly how to connect to the database when handling HTTP requests. Retrieve the connection string you modified earlier and keep it nearby.
Here’s how to do it:
- Go to the Azure Portal and browse resource groups. This opens a list of your Azure resource groups.
- Click the resource group you created earlier when provisioning your SQL Server and Function App.
- Inside the resource group, find and click your Function App.
When the app loads, your two deployed functions (get_cars and get_salesmen) should be listed toward the bottom of the overview.
- On the left navigation pane, expand the Settings section. Then click Environment variables.
- Switch from the default App settings tab to the Connection strings tab.
- Click the + Add button. A form with three inputs will appear:
  - In the Name field, type `SQL_CONNECTION_STRING`.
  - In the Value field, paste the full connection string you generated earlier.
  - In the Type field, select SQLServer.
- Check the box for Deployment slot setting. This ensures your connection string persists across deployments and slots.
- Click Apply. Azure will restart the Function App and apply the new configuration. Once restarted, your function app will have access to the database, and you’ll be ready to test it live using `curl`.
Testing the live Azure Functions
You can use simple curl commands to hit the live HTTP APIs you deployed: one to retrieve cars and another to fetch salesmen.
Retrieve car listings by running:
curl --location 'https://<appname>.azurewebsites.net/api/get_cars'
- Replace `<appname>` with your actual Azure Function App name.
If everything is set up correctly, your response should be something like:
[
{"make": "Toyota", "model": "Camry", "year": 2020, "price": 25000.0},
{"make": "Ford", "model": "Mustang", "year": 2018, "price": 30000.0},
{"make": "Tesla", "model": "Model 3", "year": 2022, "price": 45000.0}
]
This confirms that your Azure Function successfully queried the database and returned car details in JSON format.
Retrieve Salesmen details by running:
curl --location 'https://<appname>.azurewebsites.net/api/get_salesmen'
Your expected output:
[
{"name": "Alice Johnson", "email": "alice@caryard.com"},
{"name": "Bob Smith", "email": "bob@caryard.com"}
]
This output confirms a successful connection to your database and proper execution of your deployed function.
Get the complete code in this GitHub repository.
Conclusion
In this tutorial you:
- Built a simple Azure Function App using Python
- Connected it to an Azure SQL Database
- Deployed it using CircleCI with secure environment variables
- Tested live endpoints using `curl`
You now have a functional CI/CD pipeline running on CircleCI, deploying code to Azure with each push.
To take this project further, consider expanding on what you’ve learned:
- Use tools like Flyway or Liquibase to version and automate schema changes. This adds reliability and traceability to your database evolution.
- Test your endpoints post-deployment to ensure nothing breaks unexpectedly. Tools like `pytest`, `requests`, or the Postman CLI can be used to hit your live endpoints after each deployment.
- For production-ready APIs, move away from `authLevel: anonymous` and consider wrapping your functions with Azure API Management, which provides robust authentication, rate limiting, and monitoring.
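As a starting point for post-deployment testing, here is a minimal smoke-test sketch using only Python’s standard library. The `validate_items` check is demonstrated on a sample payload; `smoke_test` would be pointed at your real invoke URLs after a deploy:

```python
import json
import urllib.request

def validate_items(data, expected_keys):
    """Check that a decoded JSON payload is a list of dicts with the expected keys."""
    assert isinstance(data, list), "expected a JSON array"
    for item in data:
        missing = expected_keys - item.keys()
        assert not missing, f"missing keys {missing} in {item}"
    return True

def smoke_test(url, expected_keys):
    """Fetch a live endpoint and validate the shape of its response."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return validate_items(json.loads(resp.read()), expected_keys)

# Demonstrate the validation logic on a sample payload (no network needed):
sample = [{"make": "Toyota", "model": "Camry", "year": 2020, "price": 25000.0}]
print(validate_items(sample, {"make", "model", "year", "price"}))  # True
```

After a deployment you would call, for example, `smoke_test("https://<appname>.azurewebsites.net/api/get_cars", {"make", "model", "year", "price"})`.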
If you haven’t tried CircleCI before, this is the perfect project to get started with. Experiment by pushing code changes, adding new functions, or even wiring up notifications and approvals in your workflows. Happy building!