Azure Container Apps Example
This blog is all about Azure Container Apps Jobs. It describes a demo I created with Azure Container Apps that uses Container Apps Jobs for background processing, covering both event-triggered and timer-triggered jobs. If you want to get familiar with Azure Container Apps Jobs and learn when to use them or, equally important, when not to use them... This is your story.

I always refer to it as the new ‘Hello World’, a URL Minifier. If you want to get to learn a new technology or feature, a URL minifier is almost always suitable. You can make your project as small as you like, but also over-engineer it like crazy. This is my Azure Container Apps demo, a URL minifier, and yes…Over-engineered like crazy.

Introduction

Azure Container Apps really shines at running microservices. While that term has become polluted these days (see my previous post), it is still a very popular way of creating loosely coupled services, and if there is one thing I learned last year, it is that Azure Container Apps is your friend when creating them.

Not so long ago, I wrote a blog about Azure Container Apps Jobs. To me, they felt like (and for some part still do) Azure Functions. The idea is that you can spin up a container triggered manually, on a schedule, or event-driven. The code in your container runs and then the container is destroyed.

I always (always!!) like to try new things and put them in a side-project. Sometimes this turns out to be a fully functional demo project, often it doesn’t.

For this demo, I went for a re-vamp of the URL Minifier. The basic idea is that when you have a really long URL that is not convenient for sharing, you drag it through the minifier so it becomes a very short URL, convenient for sharing. A very simple, straightforward software system, but the fun is that you can over-engineer it like crazy. Think about collecting clicks, making accumulations, and maybe even predictions or sprinkling some AI here and there. TinyLink is born…

So the base is a service that I call the ShortLinks service. This service allows you to post a long URL, and it will generate a unique code (a ShortCode) for you that will be the reference to your link. Obviously, I needed to register a domain (tinylnk.nl) that is used to generate the URLs. The ShortLinks service then returns https://tinylnk.nl/{shortcode}, and when people navigate to that URL, they are redirected to the originally entered long URL. So this is where the source code of the ShortLinks service lives.
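
To give an idea of the shape of the service, here is a minimal sketch of what the two endpoints could look like. This is not the actual TinyLink implementation: the in-memory dictionary, the route, and the request type name are simplifications for illustration, and the real service obviously uses proper persistence and short-code generation.

using System.Collections.Concurrent;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Illustration only: an in-memory store instead of the real persistence layer
var links = new ConcurrentDictionary<string, string>();

// Post a long URL, receive the short URL back
app.MapPost("/api/shortlinks", (CreateShortLinkRequest request) =>
{
    var shortCode = Guid.NewGuid().ToString("N")[..8];
    links[shortCode] = request.TargetUrl;
    return Results.Created($"/{shortCode}",
        new { ShortCode = shortCode, Url = $"https://tinylnk.nl/{shortCode}" });
});

// Navigating to https://tinylnk.nl/{shortCode} redirects to the original long URL
app.MapGet("/{shortCode}", (string shortCode) =>
    links.TryGetValue(shortCode, out var targetUrl)
        ? Results.Redirect(targetUrl)
        : Results.NotFound());

app.Run();

record CreateShortLinkRequest(string TargetUrl);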

This is a pretty straightforward ASP.NET API service, so there is not much of interest to go into in detail. What is important to understand, though, is that each and every service is compiled into a container image in the CI/CD process. This image is then uploaded to a container registry for use at a later stage.

Deployment

The CI/CD workflow is a GitHub Actions workflow. The workflow uses GitVersion to do semantic versioning based on the information in the Git repository. It then builds a Docker container image that is pushed to a container registry in Azure.

Infrastructure

In the meantime, a Bicep template is transpiled into an ARM template, which eventually creates a deployment in Azure. This deployment creates an Azure Container App that downloads the container image from the container registry and runs it. That’s it for the ShortLinks service.

Let’s make it more interesting

The service allows for creating and resolving short links. But when a short link is resolved to its original URL, a message is posted to an Azure Service Bus topic. This topic has two subscriptions that both forward the message to a queue. So now we need a mechanism to process those queue messages. Because this is an Azure Container Apps demo, let’s go and create a Container Apps Job.
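
A rough sketch of what the publishing side could look like when a short link is resolved is shown below. The topic name and the anonymous payload are assumptions for illustration; the fields match what the job code later in this post reads from the message.

using Azure.Messaging.ServiceBus;
using System.Text.Json;

// Illustration only: topic name "hits" and the payload shape are assumptions
var client = new ServiceBusClient(Environment.GetEnvironmentVariable("ServiceBusConnection"));
var sender = client.CreateSender("hits");

var payload = new
{
    ShortCode = "abc123",
    OwnerId = "owner-1",
    CreatedOn = DateTimeOffset.UtcNow
};

// Publish the hit to the topic; Service Bus fans it out to both subscriptions,
// each of which forwards the message to its own queue
await sender.SendMessageAsync(new ServiceBusMessage(JsonSerializer.Serialize(payload)));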

Container Apps Jobs

I have already written some content about Container Apps Jobs, so time to demo some more. There are two queues receiving messages from a topic subscription in Azure Service Bus. These messages are handled by two fairly similar processes that each pick a message from their queue and store it in Table Storage.

This workload works fine in an Azure Container Apps Job, but it does not show the true power of an ACA Job. Container Apps Jobs really start to shine when you need to run background services that exceed the timeout of Azure Functions but don’t need the orchestration of Durable Functions. So this job (just grab a message from a queue and store it somewhere) is a bit small of a workload for Container Apps Jobs, but it will do for the purpose of this demo.

The code below is a (simplified) version of the job as it runs in my production environment:

using System.Text;
using Azure;
using Azure.Data.Tables;
using Azure.Identity;
using Azure.Messaging.ServiceBus;
using Newtonsoft.Json;

// Read configuration from environment variables (set on the Container Apps Job)
var sourceQueueName = Environment.GetEnvironmentVariable("QueueName");
var serviceBusConnectionString = Environment.GetEnvironmentVariable("ServiceBusConnection");
var storageAccountName = Environment.GetEnvironmentVariable("StorageAccountName");
var tableName = Environment.GetEnvironmentVariable("StorageTableName");

// Receive a single message from the Service Bus queue
var serviceBusClient = new ServiceBusClient(serviceBusConnectionString);
var receiver = serviceBusClient.CreateReceiver(sourceQueueName);
var receivedMessage = await receiver.ReceiveMessageAsync();

if (receivedMessage != null)
{
    var payloadString = Encoding.UTF8.GetString(receivedMessage.Body);
    var payload = JsonConvert.DeserializeObject<ProcessHitCommand>(payloadString);
    if (payload != null)
    {
        // Connect to Table Storage using the system-assigned managed identity
        var identity = new ManagedIdentityCredential();
        var storageAccountUrl = new Uri($"https://{storageAccountName}.table.core.windows.net");
        var tableClient = new TableClient(storageAccountUrl, tableName, identity);

        var hitEntity = new HitTableEntity
        {
            PartitionKey = "hit",
            RowKey = Guid.NewGuid().ToString(),
            ShortCode = payload.ShortCode,
            OwnerId = payload.OwnerId,
            Hits = 1,
            Timestamp = payload.CreatedOn,
            ETag = ETag.All
        };
        var response = await tableClient.UpsertEntityAsync(hitEntity);
        if (!response.IsError)
        {
            // Only complete the message when the entity was stored successfully
            Console.WriteLine("Completing original message in service bus");
            await receiver.CompleteMessageAsync(receivedMessage);
            Console.WriteLine("All good, process complete");
        }
    }
}

The code above first sets some variable values that come in from configuration. It then receives a message from a Service Bus queue and stores it as a Table Entity in Table Storage. Finally, the message is completed on the Service Bus to indicate it was handled successfully.
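
The ProcessHitCommand and HitTableEntity types are not shown in the snippet. Based on how they are used there, a minimal version could look like the sketch below; the property names are derived from the snippet and may differ from the actual repository.

using Azure;
using Azure.Data.Tables;

// Command payload as it arrives on the queue
public class ProcessHitCommand
{
    public string ShortCode { get; set; }
    public string OwnerId { get; set; }
    public DateTimeOffset CreatedOn { get; set; }
}

// Table Storage entity for a single hit
public class HitTableEntity : ITableEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string ShortCode { get; set; }
    public string OwnerId { get; set; }
    public int Hits { get; set; }
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; }
}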

Creating a job

The code above is also compiled into a container image and pushed to the container registry. Then, with my infrastructure as code deployment, I deploy it as an Azure Container Apps Job. The Bicep snippet below is the most important part of my infra as code deployment for the ACA Job:

resource hitsProcessorJob 'Microsoft.App/jobs@2023-05-01' = {
  name: 'tinylnk-jobs-hitscalc-processor'
  location: location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    environmentId: containerAppEnvironment.id
    configuration: {
      secrets: [
        {
          name: 'servicebus-connection-string'
          value: serviceBusConnectionString
        }
        {
          name: 'container-registry-secret'
          value: containerRegistry.listCredentials().passwords[0].value
        }
      ]
      replicaTimeout: 60
      replicaRetryLimit: 1
      triggerType: 'Event'
      eventTriggerConfig: {
        replicaCompletionCount: 1
        parallelism: 1
        scale: {
          minExecutions: 0
          maxExecutions: 100
          pollingInterval: 30
          rules: [
            {
              name: 'azure-servicebus-queue-rule'
              type: 'azure-servicebus'
              metadata: any(
                {
                  queueName: serviceBus::queue.name
                  connection: 'servicebus-connection-string'
                }
              )
              auth: [
                {
                  secretRef: 'servicebus-connection-string'
                  triggerParameter: 'connection'
                }
              ]
            }
          ]
        }
      }
      registries: [
        {
          server: containerRegistry.properties.loginServer
          username: containerRegistry.name
          passwordSecretRef: 'container-registry-secret'
        }
      ]
    }
    template: {
      containers: [
        {
          image: '${containerRegistry.properties.loginServer}/tinylnk-jobs-hitscalcprocessor:${containerVersion}'
          name: 'hits-calc-processor'
          env: [
            {
              name: 'ServiceBusConnection'
              secretRef: 'servicebus-connection-string'
            }
            {
              name: 'QueueName'
              value: serviceBus::queue.name
            }
            {
              name: 'StorageAccountName'
              value: hitsStorageAccount.name
            }
            {
              name: 'StorageTableName'
              value: hitsStorageAccount::hitsTableStorageService::table.name
            }
          ]
          resources: {
            cpu: json('0.25')
            memory: '0.5Gi'
          }
        }
      ]
    }
  }
}

Let’s walk through some important parts of this Bicep snippet. First, the name is the name the Azure Container Apps Job resource will get in Azure. A system-assigned identity is added to the resource, and the ACA Job is assigned to an (already existing) Azure Container Apps Environment. Then I set a couple of secrets needed by the container: one for pulling the image from the container registry and one for connecting to the Service Bus.

Then the configuration sets some properties of the Container Apps Job, like the replica timeout, the retry limit, and so forth. These settings are also explained in my previous post about ACA Jobs.

Then some information about the trigger. Every ACA Job needs a trigger: either a manual, a timer, or an event-driven trigger. In this case, I use the event-driven trigger and hook it up to my Azure Service Bus queue. Under the hood, KEDA takes care of scaling the ACA Job, so it scales up when messages on the queue start to pile up.

Finally, there is the container registry and container information: settings to pull the image from the registry and to configure the container with the values it needs.

The secrets used in the template above may be removed and replaced by taking advantage of the Managed Identity of the Job, but I have not tested that yet.

Running the Container Apps Job

Now, if you replicate this environment and run it, you may notice that when a larger number of hits comes in, the Container Apps Job doesn’t handle the messages at the pace you would expect compared to, for example, running this workload with Azure Functions. The reason is mostly that Azure Functions start much faster, scale differently, and also grab messages from queues more efficiently. Again, this is because the workload in this example is too small for an ACA Job.

When a job needs to run, it pulls a container image from the container registry, starts the container, runs your code, and then destroys the container again. This is, especially compared to Azure Functions, very time-consuming.

A better example

This is why I implemented a different job. This job grabs all hits that came in during a certain timeframe, calculates averages, and accumulates hits over chunks (10 minutes) of time. This allows me to (at a later stage) show charts of when hits on a short link came in. The code for this accumulation process can be found here and is (again) a fairly straightforward approach of fetching hits, doing some calculations, and storing the outcome. But because of the amount of data handled and the calculations it contains, this task is much more suitable to run as an ACA Job.
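
In broad strokes, the accumulation could look something like the sketch below. The table names ("hits" and "hitaggregates") and the bucketing logic are assumptions of mine and only illustrate the idea of grouping hits into 10-minute chunks; the real repository also calculates averages and will differ in the details.

using Azure.Data.Tables;
using Azure.Identity;

// Illustration only: table names and entity shapes are assumptions
var storageAccountUrl = new Uri($"https://{Environment.GetEnvironmentVariable("StorageAccountName")}.table.core.windows.net");
var identity = new ManagedIdentityCredential();
var hitsTable = new TableClient(storageAccountUrl, "hits", identity);
var aggregatesTable = new TableClient(storageAccountUrl, "hitaggregates", identity);

// Fetch all hits of the last hour and group them into 10-minute buckets per short code
var since = DateTimeOffset.UtcNow.AddHours(-1);
var hits = hitsTable.QueryAsync<TableEntity>(e => e.Timestamp >= since);

var buckets = new Dictionary<(string ShortCode, DateTimeOffset Bucket), int>();
await foreach (var hit in hits)
{
    var shortCode = hit.GetString("ShortCode");
    var timestamp = hit.Timestamp!.Value;
    var bucket = new DateTimeOffset(
        timestamp.Year, timestamp.Month, timestamp.Day,
        timestamp.Hour, timestamp.Minute - timestamp.Minute % 10, 0, timestamp.Offset);

    var key = (shortCode, bucket);
    buckets[key] = buckets.TryGetValue(key, out var count) ? count + 1 : 1;
}

// Store one aggregate entity per short code per 10-minute chunk
foreach (var ((shortCode, bucket), count) in buckets)
{
    var entity = new TableEntity(shortCode, bucket.ToString("yyyyMMddHHmm"))
    {
        ["Hits"] = count
    };
    await aggregatesTable.UpsertEntityAsync(entity);
}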

This job is triggered using a timer. To create a timer-triggered ACA Job, you set the triggerType property in Bicep to Schedule and add a scheduleTriggerConfig object instead of an eventTriggerConfig. The Bicep looks like this:

triggerType: 'Schedule'
scheduleTriggerConfig: {
  parallelism: 1
  replicaCompletionCount: 1
  cronExpression: '*/10 * * * *'
}

The CRON expression above describes an interval of 10 minutes. So every 10 minutes, ACA Jobs will pull the image from my container registry, run the container, and destroy it again.

This project

Although this project is not cutting edge in terms of functionality, the technology used is fairly new and really fun to play with. Obviously, the project is over-engineered for the purpose of this demo, and I will add even more functionality that is totally not required, but fun to add. Don’t get me wrong though: the Azure Container Apps service, and the Jobs that come with Container Apps, are very much serious services that you can use in serious production environments.

The project is under construction, so the architecture and some implementations may evolve over time. The entire project is hosted on GitHub in several different repositories. The integration project contains the base cloud environment (Container Apps Environment, logging, instrumentation, and so forth). The API project contains the API that allows for creating and resolving short links. The hits repository is responsible for collecting and accumulating hits data, and then exposing it.

