Azure, Bicep, CI/CD

Feb 22, 2026

Automated Builds and Deployments in Azure

Check out the Repo here: https://github.com/SkippySteve/AzureDevOps

The repo contains the source for everything: Bicep, pipelines, and the app (frontend + backend).

This is just a start; only the essentials for now. Feel free to take this and make it your own (GPL 3.0, so share accordingly)! Currently there is no dev environment, only prod, but creating a dev environment, and automating its build/deployment, would be a great next step.

The goal is to be able to deploy quickly for learning purposes, then tear everything down to save on Azure costs when it's not in use.

I aim to use Bash when possible, rather than a custom Azure DevOps image that only works in ADO. I'd like to be able to take as much of this as possible and run it at home with Forgejo, or even on GitHub, Codeberg, etc.

Manual Setup

We need to create a Service Connection that gives our Azure DevOps repos permission to create resources and to perform actions within those resources, such as pushing to our container registry. I gave the service connection the following roles (a rough CLI equivalent is sketched after the list):

  • User Access Administrator on the Subscription
  • Contributor on the RG
  • AcrPush on the Registry
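
For reference, here is a rough CLI sketch of those role assignments. The app ID, subscription ID, and registry name are placeholders (not from the repo); substitute your own, or assign the roles however you prefer.

# Hypothetical placeholders: <appId>, <subId>, and <acrName> are not real values from the repo
az role assignment create --assignee <appId> --role "User Access Administrator" \
  --scope "/subscriptions/<subId>"
az role assignment create --assignee <appId> --role "Contributor" \
  --scope "/subscriptions/<subId>/resourceGroups/rg-prod"
az role assignment create --assignee <appId> --role "AcrPush" \
  --scope "/subscriptions/<subId>/resourceGroups/rg-prod/providers/Microsoft.ContainerRegistry/registries/<acrName>"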

Secrets can be added manually with the Azure CLI, either on your own machine or from a Cloud Shell in the portal.

az keyvault secret set --vault-name <VaultID> --name secret-name --value "<Secret>"
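
To confirm a secret landed (or to check its current value before a redeploy), the matching read command is below; the vault and secret names are the same placeholders as above.

# Read a secret back to verify it was stored correctly
az keyvault secret show --vault-name <VaultID> --name secret-name --query value -o tsv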

Backend

We will start with the backend. It's an old school project: a scikit-learn ML model strapped into a FastAPI server. It hadn't been touched in over a year, so it needed some updating. The Dockerfile also needed some work, but mostly I just bumped the Python version; nothing very exciting to share.

When I ran this before, I had hardcoded the origin for CORS. Now it needs to be dynamic because our origin will change as we destroy and re-deploy to Azure.

import os

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

FRONTEND_URL = os.getenv("FRONTEND_URL", "http://localhost:5173")  # Allow localhost for local dev

app.add_middleware(
    CORSMiddleware,
    allow_origins=[FRONTEND_URL],  # allow_origins expects a list of origins
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
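
To sanity-check the CORS setup, a preflight request with curl should come back with the allowed origin in the response headers. This assumes the backend is running locally on port 8000 (the same port the container exposes later).

# Send a CORS preflight to the local backend and inspect the Access-Control-* response headers
curl -i -X OPTIONS http://localhost:8000/ \
  -H "Origin: http://localhost:5173" \
  -H "Access-Control-Request-Method: GET"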

This is the part that lets me build images and push them to my Azure Container Registry automatically. Currently it only works when pushing to main, but that is because I haven’t built a dev environment, yet. It also contains a nasty hardcoded ACR name, which I haven’t yet figured out how to set dynamically. Please let me know if you have a good method!

In the first step, we simply log in to ACR.

trigger:
- main
variables:
  acrName: 'acrxs344ph3injno'
  azureServiceConnection: 'pipeline-deploy-to-prod'
pool:
  vmImage: 'ubuntu-latest'
steps:
- task: AzureCLI@2
  displayName: Login to ACR
  inputs:
    azureSubscription: $(azureServiceConnection)
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az acr login --name $(acrName)
...

Now that we’re logged into the ACR, we can build and push our image.

...
- script: |
    IMAGE_TAG=$(Build.BuildId)
    echo "##vso[task.setvariable variable=imageTag]$IMAGE_TAG"
    echo $IMAGE_TAG > backend-image-tag.txt
    docker build -t $(acrName).azurecr.io/capstone-backend:$IMAGE_TAG .
    docker push $(acrName).azurecr.io/capstone-backend:$IMAGE_TAG
  displayName: Build & Push Backend
- publish: backend-image-tag.txt
  artifact: backend-image-tag

Currently I'm just using the default pipeline variables to generate an image tag. I'm publishing the image tags as artifacts so we can use them when performing automated Bicep deployments.

Frontend

The frontend uses React and TanStack Router. It also needed a bit of updating, especially with the nasty Remote Code Execution vulnerability found in React recently.

I used a simple shell script to dynamically set the backend URL when the container starts.

function App() {
  const apiBaseUrl = "API_BASE_URL_PLACEHOLDER";
...
#!/bin/sh
# Search all JS files in the dist folder and replace the placeholder
find /app -name "*.js" -exec sed -i "s|API_BASE_URL_PLACEHOLDER|${VITE_API_BASE_URL}|g" {} +

# Run the original container entrypoint, which I overwrote with this script and moved into arguments
exec "$@"
...
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
EXPOSE 3000
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["serve", "-s", ".", "-l", "3000"]

The pipeline for the Frontend build and push is basically identical to the Backend.

Infrastructure Pipeline (Part 1)

First, let's look at the azure-pipelines.yml file. WGUCapstoneBackend is the name of a repo in the same Azure DevOps project. We can watch for pushes to specific branches and kick off an infrastructure deployment accordingly.

trigger: none   # Prevents infra pipeline from running on code pushes
resources:
  pipelines:
  - pipeline: backend
    source: WGUCapstoneBackend
    project: PipelineLab
    branch: main # Which branch in WGUCapstoneBackend to watch for pushes to
    trigger: true # Start infrastructure pipeline when main is pushed to
  - pipeline: frontend
    source: WGUCapstoneFrontend
    project: PipelineLab
    branch: main
    trigger: true

Next we set variables to use later in the pipeline definition. The service connection name was already set earlier; the resource group name and location can be set here.

variables:
  azureServiceConnection: 'pipeline-deploy-to-prod'
  resourceGroupName: 'rg-prod'
  location: 'eastus'

First, I download the artifacts from the Backend and Frontend pipelines.

stages:
- stage: DeployInfra
  displayName: Deploy Infrastructure
  jobs:
  - job: Deploy
    displayName: Deploy Bicep Template
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    # DOWNLOAD IMAGE TAG ARTIFACTS
    - download: backend
      artifact: backend-image-tag
    - download: frontend
      artifact: frontend-image-tag

Next we can take the image tag from the artifact and place it into a variable. $(Pipeline.Workspace) is an important piece to take note of! This was the best way I could find to locate the image tag artifacts we downloaded.

    - script: |
        echo "Reading backend image tag..."
        BACKEND_TAG=$(cat $(Pipeline.Workspace)/backend/backend-image-tag/backend-image-tag.txt)
        echo "##vso[task.setvariable variable=backendImageTag]$BACKEND_TAG"

        echo "Reading frontend image tag..."
        FRONTEND_TAG=$(cat $(Pipeline.Workspace)/frontend/frontend-image-tag/frontend-image-tag.txt)
        echo "##vso[task.setvariable variable=frontendImageTag]$FRONTEND_TAG"
      displayName: "Load image tags"

This is where the magic happens. We are calling our Bicep template and passing in the image tags as parameters.

    - task: AzureResourceManagerTemplateDeployment@3
      displayName: "Deploy Bicep"
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: '$(azureServiceConnection)'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(resourceGroupName)'
        location: '$(location)'
        templateLocation: 'Linked artifact'
        csmFile: 'infra/main.bicep'
        overrideParameters: '-backendImageTag "$(backendImageTag)" -frontendImageTag "$(frontendImageTag)"'
        deploymentMode: 'Incremental'
        deploymentName: prod
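
If you want to run the same deployment from your own machine while iterating on the template, the rough CLI equivalent is below. It assumes you're logged in with az login and pass the image tags by hand; <tag> is a placeholder.

# Rough local equivalent of the AzureResourceManagerTemplateDeployment task above
az deployment group create \
  --resource-group rg-prod \
  --name prod \
  --template-file infra/main.bicep \
  --parameters backendImageTag=<tag> frontendImageTag=<tag> \
  --mode Incremental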

Infrastructure Bicep

This is where all of our infrastructure is defined. It needs to be broken into modules, but I haven't gotten that far yet. uniqueString is used to ensure we get a unique ID for our resources.

@description('The Azure region for all resources.')
param location string = resourceGroup().location

@description('The name of the Virtual Network.')
param vnetName string = 'vnet-prod'
// Parameters of image tags from azure-pipelines.yml declared but not defined
param backendImageTag string
param frontendImageTag string
param acrName string = 'acr${uniqueString(resourceGroup().id)}' // uniqueString ensures a globally unique name
param keyVaultName string = 'kv-${uniqueString(resourceGroup().id)}'

The network that Azure Container Apps is allowed to create and manage resources in.

resource vnet 'Microsoft.Network/virtualNetworks@2023-09-01' = {
  name: vnetName
  location: location
  tags: resourceTags
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/16'
      ]
    }
    subnets: [
      {
        name: 'snet-capstone-prod'
        properties: {
          addressPrefix: '10.0.0.0/23'
          delegations: [
            {
              name: 'cae-delegation'
              properties: {
                serviceName: 'Microsoft.App/environments'
              }
            }
          ]
        }
      }
    ]
  }
}

Creating our Container Registry.

resource acr 'Microsoft.ContainerRegistry/registries@2023-07-01' = {
  name: acrName
  location: location
  sku: {
    name: 'Basic'
  }
  properties: {
    adminUserEnabled: false
  }
}

Creating a User Assigned Managed Identity to allow pulling from the registry when deploying.

// Managed Identity (The "Passport")
resource userIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' = {
  name: 'id-app-service-puller'
  location: location
}
// Role Assignment (Giving the Identity "Pull" permission on the Registry)
resource acrPullRole 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(acr.id, userIdentity.id, 'AcrPull')
  scope: acr
  properties: {
    principalId: userIdentity.properties.principalId
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d')
    principalType: 'ServicePrincipal'
  }
}
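
After a deployment, you can double-check that the pull role actually landed. This isn't part of the pipeline, just a quick manual check: it looks up the identity created above and lists its role assignments.

# Verify the identity's role assignments (rg-prod and the identity name come from the pipeline and Bicep above)
PRINCIPAL_ID=$(az identity show --name id-app-service-puller --resource-group rg-prod --query principalId -o tsv)
az role assignment list --assignee "$PRINCIPAL_ID" --query "[].{role:roleDefinitionName, scope:scope}" -o table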

Key Vault where secrets are stored until retrieved and turned into ENV variables in containers.

resource keyVault 'Microsoft.KeyVault/vaults@2023-02-01' = {
  name: keyVaultName
  location: location
  properties: {
    tenantId: tenant().tenantId
    sku: {
      family: 'A'
      name: 'standard'
    }
    accessPolicies: []
  }
}

This access policy is essential to allow our container to pull secrets from the Key Vault.

resource keyVaultAccess 'Microsoft.KeyVault/vaults/accessPolicies@2023-02-01' = {
  parent: keyVault
  name: 'add'
  properties: {
    accessPolicies: [
      {
        tenantId: tenant().tenantId
        objectId: userIdentity.properties.principalId
        permissions: {
          secrets: [
            'get'
            'list'
          ]
        }
      }
    ]
  }
}
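
A quick way to confirm the policy took effect is to list the vault's access policies and look for the identity's object ID. The vault name is whatever the kv- prefix plus uniqueString produced, so treat <keyVaultName> as a placeholder.

# List the object IDs that have access policies on the vault
az keyvault show --name <keyVaultName> --query "properties.accessPolicies[].objectId" -o tsv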

Container App Environment where our Azure Container Apps will run.

resource containerAppEnv 'Microsoft.App/managedEnvironments@2023-05-01' = {
  name: 'cae-portfolio'
  location: location
  properties: {
    workloadProfiles: [
      {
        name: 'Consumption'
        workloadProfileType: 'Consumption'
      }
    ]
    vnetConfiguration: {
      infrastructureSubnetId: resourceId('Microsoft.Network/virtualNetworks/subnets', vnetName, 'snet-capstone-prod') // must match the subnet declared on the vnet above
    }
  }
}

Backend Bicep App

resource backendApp 'Microsoft.App/containerApps@2023-05-01' = {
  name: 'app-backend'
  location: location
  identity: {
    type: 'UserAssigned' 
    userAssignedIdentities: {
      '${userIdentity.id}': {} // Assign Managed Identity to Container App
    }
  }
  dependsOn: [
    keyVaultAccess // Wait for permissions to be active
    acrPullRole    // Ensure ACR pull is also ready
  ]
...
...
  properties: {
    managedEnvironmentId: containerAppEnv.id
    configuration: {
      ingress: {
        external: true // Our backend MUST be external, so the users can reach it using the API
        targetPort: 8000
      }
      registries: [
        {
          server: acr.properties.loginServer
          identity: userIdentity.id
        }
      ]
...

The Bicep linter doesn’t like that I’ve hardcoded the URL, but it works. If you know of a better way, please let me know!

...
      secrets: [
        {
          name: 'openai-api-key'
          keyVaultUrl: 'https://${keyVault.name}.vault.azure.net/secrets/openai-api-key'
          identity: userIdentity.id
        }
        {
          name: 'openai-api-url'
          keyVaultUrl: 'https://${keyVault.name}.vault.azure.net/secrets/openai-api-url'
          identity: userIdentity.id
        }
        {
          name: 'password'
          keyVaultUrl: 'https://${keyVault.name}.vault.azure.net/secrets/password'
          identity: userIdentity.id
        }
        {
          name: 'algo'
          keyVaultUrl: 'https://${keyVault.name}.vault.azure.net/secrets/algo'
          identity: userIdentity.id
        }
        {
          name: 'resend-api-key'
          keyVaultUrl: 'https://${keyVault.name}.vault.azure.net/secrets/resend-api-key'
          identity: userIdentity.id
        }
      ]
    }
...

Now we can map the secrets we grabbed above to ENV variables in our containers.

...
    template: {
      containers: [
        {
          name: 'capstone-backend'
          image: '${acr.properties.loginServer}/capstone-backend:${backendImageTag}'
          env: [
            {
              name: 'OPENAI_API_KEY'
              secretRef: 'openai-api-key'
            }
            {
              name: 'OPENAI_API_URL'
              secretRef: 'openai-api-url'
            }
            {
              name: 'PASSWORD'
              secretRef: 'password'
            }
            {
              name: 'ALGO'
              secretRef: 'algo'
            }
            {
              name: 'RESEND_API_KEY'
              secretRef: 'resend-api-key'
            }
          ]
        }
      ]
...

Setting minReplicas to 1 to remove cold starts.

...
      scale: {
        minReplicas: 1
      }
    }
  }
}

Frontend Bicep App

Most of the frontend is identical to the backend. If you're curious, please check out the repo on GitHub.

...
    template: {
      containers: [
        {
          name: 'capstone-frontend'
          image: '${acr.properties.loginServer}/capstone-frontend:${frontendImageTag}'
          env: [
            {
              name: 'VITE_API_BASE_URL'
              value: 'https://placeholder-backend-url' // This will need to be changed later...
            }
          ]
        }
      ]
...

The last part of the Bicep file outputs useful information from our deployment.

output vnetId string = vnet.id
output storageName string = storageAccount.name
output acrLoginServer string = acr.properties.loginServer
// These are needed in the next step
output frontendHostName string = frontendApp.properties.configuration.ingress.fqdn
output backendHostName string = backendApp.properties.configuration.ingress.fqdn

Infrastructure Pipeline (Part 2)

Finally, we need to patch our ENVs with the correct URLs, as we only get those after Bicep deployment completes.

- stage: PatchApps
  displayName: Patch Container Apps with Real URLs
  dependsOn: DeployInfra
  jobs:
  - job: Patch
    pool:
      vmImage: ubuntu-latest
...

In the first part of our patching step, we gather the outputs of our deployment, such as backendHostName, and store them as ENV variables.

    steps:
    - task: AzureCLI@2
      name: FetchFqdns
      displayName: Patch Backend & Frontend
      inputs:
        azureSubscription: $(azureServiceConnection)
        scriptType: bash
        scriptLocation: inlineScript
        inlineScript: |
          BACKEND_FQDN=$(az deployment group show \
            --resource-group $(resourceGroupName) \
            --name prod \
            --query "properties.outputs.backendHostName.value" -o tsv)

Now we can patch our resource using the correct URL.

...
          echo "##vso[task.setvariable variable=backendFqdn;isOutput=true]$BACKEND_FQDN" 

          az containerapp update \
            --name app-frontend \
            --resource-group $(resourceGroupName) \
            --set-env-vars VITE_API_BASE_URL="https://$BACKEND_FQDN"
...
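
As a final manual check (not part of the pipeline), you can confirm the env var actually landed on the frontend app.

# Inspect the frontend container's env vars after the patch
az containerapp show \
  --name app-frontend \
  --resource-group rg-prod \
  --query "properties.template.containers[0].env" -o table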

That should cover the entire process, from start to finish! It’s been an interesting project, and I can’t wait to share more CI/CD, containers, cloud native and cybersecurity content. If you enjoy this sort of thing, please connect with me on LinkedIn! That’s it for now, and congrats on making it to the end!