
So it’s 2019 now, almost 2020. More and more companies are migrating their solutions to the cloud. In my profession, building cloud and web solutions, I see that companies are moving their solutions to different architectures that are more suitable for cloud environments. Classic web systems are replaced by serverless systems, and huge IT systems are torn apart into microservices. Since Azure Functions became mature, they’re a really good replacement for classic ASP.NET web solutions running on a Web App or on IIS. So I started using these serverless solutions more and more. Because I also like to create user-friendly software, I often use the SignalR real-time framework to notify the user of processes going on on the server. For example, when sending a command to a serverless function, you may want to inform the user whether processing that command was successful (or not). In the past, you needed a web host to run SignalR, but running a web host in the cloud is relatively expensive. Today, SignalR is one of the native cloud services Azure can deliver. In this blog, I’m going to implement this SignalR Service.

The demo project

Developers always create demo projects to try something new. The idea is great, but there’s never time to finish the project, and so it lands in the trash can somewhere in the next five years. So for this blog, to show how the SignalR Service works, I… Yes… created a demo project. It’s an Angular front-end project uploading images to Azure Blob Storage. An Azure Function will be triggered by the blob creation and starts resizing the image into two versions: a thumbnail and a fairly decent web size (1024 x 768). Image references are stored in Azure Table Storage, and once both images are sized correctly, the state of the image in Table Storage will be set to available. Then a message will be broadcast using SignalR, which enables the front-end system to respond. Pretty awesome, and you could use this exact same scenario for, for example, importing data: just upload the data, make a function that imports it, and report status through SignalR.
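To give you an idea of the shape of that resize function, here is a minimal sketch. The binding attributes are the regular Azure WebJobs storage bindings, but the container names, the queue name and the resize step itself are assumptions for illustration, not the demo’s exact code:

[FunctionName("ResizeUploadedPicture")]
public static async Task ResizeUploadedPicture(
    [BlobTrigger("pictures/{name}")] Stream uploadedImage,                     // fires when a new blob is created
    [Blob("pictures-thumb/{name}", FileAccess.Write)] Stream thumbnail,        // output blob for the thumbnail
    [Blob("pictures-web/{name}", FileAccess.Write)] Stream webSize,            // output blob for the 1024 x 768 version
    [Queue("picture-status-commands")] IAsyncCollector<string> statusCommands, // command queue for the status update
    string name,
    ILogger log)
{
    log.LogInformation($"Resizing picture {name}");

    // ... resize uploadedImage into thumbnail and webSize here (e.g. with an imaging library) ...

    // Tell the rest of the system this picture has been processed.
    var command = new { PictureName = name, Available = true };
    await statusCommands.AddAsync(JsonConvert.SerializeObject(command));
}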

So first I navigated to the Azure Portal and started creating a SignalR Service.

Now when Azure has created the resource, navigate to the newly created SignalR Service and open the Keys blade. Here you’ll find two keys and two connection strings. Copy one of the connection strings; you’re going to need it in the Azure Functions project. Then navigate to the CORS blade and check whether there’s an allowed origin *. If not, add it. You may want to change this to a valid endpoint once your system goes to production, but for this demo you’ll be fine. Please note that I selected Serverless as the ServiceMode. This mode should only be selected when you use SignalR from an Azure Functions project.

Next up, the Functions project

Now open Visual Studio. I used VS 2019 (16.3.18) and Azure Functions v2. Create a new Azure Functions project and check whether your project contains a local.settings.json file. If not, create it and add the copied connection string value as a setting called ‘AzureSignalRConnectionString’. Your local.settings.json should look like this (or something similar):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureSignalRConnectionString": "Endpoint=https://your-signalr.service.signalr.net;AccessKey=/--secred-access-key-here--/;Version=1.0;"
  },
  "Host": {
    "LocalHttpPort": 7071,
    "CORS": "http://localhost:4200",
    "CORSCredentials": true
  }
}

The Angular client makes HTTP requests to the negotiate function to initiate the connection negotiation. When the client application is hosted on a different domain than the Azure Function app, cross-origin resource sharing (CORS) must be enabled on the Function app or the browser will block the requests. This is why I also added some CORS settings in the settings file. I know my Angular client is going to run on localhost port 4200. Once again, you may want to change these settings once you go to production.

As you all know, an Azure Function is fired by a trigger and may use bindings (input and/or output) to consume external data or services, or to send data to external services. We’re going to use a SignalR output binding, which means we send data out to the SignalR Service. This data fires an event on the client, which can be handled accordingly. The bindings for the SignalR Service can be installed by adding a NuGet package to your project. Look for the package called Microsoft.Azure.WebJobs.Extensions.SignalRService. My project used version 1.0.2, just so you know.

Now it’s time to implement the negotiate endpoint. SignalR uses this endpoint to initiate a connection and determine server and client capabilities. In your Azure Functions project, create a new function with an HTTP trigger that looks like this:

[FunctionName("negotiate")]
public static SignalRConnectionInfo SignalRNegotiate(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")]  HttpRequestMessage req,
    [SignalRConnectionInfo(HubName = "notifications")] SignalRConnectionInfo connectionInfo)
{
    return connectionInfo;
}

That's pretty much all there is to it. This endpoint allows clients to connect to the SignalR Service. Connecting to this endpoint redirects you to your SignalR Service, which in turn returns its capabilities (like the available transport types and so on).

I explained that I persist a reference to uploaded pictures in Table Storage. Once a file is uploaded and successfully scaled, I send a command on a queue that sets an availability flag on the picture entity in Table Storage. When the table entity is successfully updated, I send a message through the SignalR Service.

The function looks like so (I stripped code which doesn’t add value for this demo):

[FunctionName("PictureStatusCommandQueueHandler")]
public static async Task PictureStatusCommandQueueHandler(
    [QueueTrigger(Constants.QueueNamePictureStatusCommands)] string pictureStatusCommandMessage,
    [Table(TableNames.Pictures)] CloudTable picturesTable,
    [SignalR(HubName = SignalRHubNames.NotificationsHub)] IAsyncCollector signalRMessages,
    ILogger log)
{
    log.LogInformation("Picture status command retrieved");
    SetStorageConsumptionCommand consumptionCommand = null;
    ...
    if (...)
    {

	...
        Update the table entity here
        ...

        var pictureDto = new PictureDto
        {
            CreatedOn = entity.Timestamp,
            Id = Guid.Parse(entity.RowKey),
            Name = entity.Name,
            Url = picturesTable.ServiceClient.BaseUri.ToString()
        };
        await signalRMessages.AddAsync(
            new SignalRMessage
            {
                Target = "newPicture",
                Arguments = new object[] { pictureDto }
            });
        }
    }
    return consumptionCommand;
}

So what happens here is basically that I create a Data Transfer Object (DTO) which I want to push to the client, and I use SignalR as the mechanism to do that for me. The DTO will be converted to JSON and passed to the client. The Target here (newPicture) is the event that will be raised client-side, and the arguments can be seen as the payload of that message.

The Angular project

Before we run into a discussion that doesn’t make sense… I’m a cloud solution architect and I really like C# and the Microsoft development stack. I also have a strong affinity with Angular. The fact that I use Angular for the demo project doesn’t mean it’s the best solution; Vue, React and all other frameworks/component libraries work fine! So I created this Angular project and inside that project created a service. This service uses the @aspnet/signalr package, so you need to install that. For your information, my demo project used version 1.1.4.

npm i @aspnet/signalr

or yarn if you like

yarn add @aspnet/signalr

Now back to the service. Since the service is quite large, I created a GitHub Gist here. The service contains a connect and a disconnect function. The endpoint to connect to is your Azure Functions project URL: http://{az-functions-project}/api

By connecting to that location, the SignalR client will send a POST request to the negotiate endpoint of your Azure Functions project, and the SignalR Service does the rest for you.

Now if you scroll down to line 22 of the gist, you see this code:

this.connection.on('newPicture', (dto: PictureDto) => {
    console.log('New picture arrived');
});

This fragment subscribes to the ‘newPicture’ event. Remember the Azure Function in which we sent a message with the target ‘newPicture’? Well, this is the event handler on the client handling that event. In this case, a message is written to the browser’s console, but you also see the dto of type PictureDto, which contains the actual information about the image as it was passed by the Azure Function.

Now create a component that consumes the realtime service and calls the service’s connectSignalR() function and you’re good to go!!

I have quite some history with SignalR, so I expected a very complicated solution. It took me some time to figure out how the SignalR service is implemented, but mostly because I expected something difficult. The reality is that the SignalR Service integrates extremely well and lowers the complexity bar big time! Have fun experimenting!


Today, everyone is moving their software systems to the cloud. Personally I’m a big fan of Microsoft Azure. My job is to support companies migrating software systems to the cloud. What I see is that a lot of companies and developers don’t really know how cloud solutions work and how you can make them work for you.

One of the services in Microsoft Azure is the Service Bus. It’s a messaging system designed to let software systems, or software components, communicate with each other.

Now when you have an ASP.NET website running somewhere in a data center of choice and you want to move to the cloud (Azure), you simply create a Web App and host the website as is. However, when your system gets more and more load, you need to scale (up or out), which is fairly expensive. You can save a lot of money by investigating why your system demands these resources and why scaling up or out is a requirement.

Often, there is just one single part of the website demanding these resources, while all the other parts are running just fine. Take a bank, for example: the services for creating a new account, changing an address, or requesting a new debit card demand far fewer resources than, for example, the transactions service allowing money transfers. In such a case, it could be valuable to try and take the pressure off the transactions service by distributing the workload. The Service Bus is an excellent native cloud service that will definitely help you, and I’m going to explain how.

The basics of the Service Bus

So what is this Service Bus thing? Well, basically a very simple messaging mechanism. It contains queues and topics. The difference is that a queue is, as its name suggests, a queue of messages. Each message will be delivered only once to any system reading from that queue. For example, when you make a bank transfer, you want that transfer to take place only once. So when multiple systems read from the queue and a new message arrives, only one of those systems will receive the message. A topic can be compared to a newspaper, or your favorite magazine. Whoever has a subscription gets the message as soon as it comes out. So if multiple systems have a subscription to a certain message, the message will be delivered multiple times.
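To make that difference concrete, here is a minimal sketch using the Microsoft.Azure.ServiceBus package. The connection string, entity names and subscription name are placeholders, not part of the demo:

public static async Task QueueVersusTopicSample()
{
    // Queue: competing consumers, each message is delivered to only one of them.
    var queueClient = new QueueClient("<service-bus-connection-string>", "transactions");
    await queueClient.SendAsync(new Message(Encoding.UTF8.GetBytes("transfer #1")));

    // Topic: every subscription receives its own copy of the message.
    var topicClient = new TopicClient("<service-bus-connection-string>", "bank");
    await topicClient.SendAsync(new Message(Encoding.UTF8.GetBytes("account updated")));

    // A consumer of a topic listens to a named subscription.
    var subscriptionClient = new SubscriptionClient("<service-bus-connection-string>", "bank", "email-service");
    subscriptionClient.RegisterMessageHandler(
        (message, cancellationToken) =>
        {
            Console.WriteLine($"Received: {Encoding.UTF8.GetString(message.Body)}");
            return Task.CompletedTask;
        },
        new MessageHandlerOptions(args => Task.CompletedTask) { AutoComplete = true });
}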

A tiny side-step to Microservices

In case you’re developing microservices, you may need a messaging system to make sure you meet the eventual consistency requirement. Only one microservice will be responsible for manipulating a certain entity, but more services may need to receive an update of the changed entity. The Service Bus would be an excellent solution here, because you can easily broadcast the updated entity through a topic. All services that need this update can subscribe to that message.

A practical example

So here we go, an example that makes sense. Let’s take the bank example: a transactions service that demands a lot of resources because it draws a lot of traffic and a lot of validations are going on during each request. That makes it a good candidate for change.

[HttpPost]
public async Task<IActionResult> Post([FromBody] CreateTransactionDto dto)
{
    if (ModelState.IsValid)
    {
        dto.TransactionOn = DateTimeOffset.UtcNow;
        var messageBody = JsonConvert.SerializeObject(dto);
        var message = new Message(Encoding.UTF8.GetBytes(messageBody));
        await _queueClient.SendAsync(message);
        return Accepted(dto);
    }
    return BadRequest();
}

In the previous block of code, I removed all validations and ‘heavy’ stuff that demands a lot of resources. Usually when you create a bank transaction, a large number of validations are required to make sure the transaction can actually take place. The only validation done here is the ModelState validation. The next thing is creating a Service Bus message, which is sent to a queue client. In this example I return an Accepted HTTP response to indicate that I ‘accepted the request to create a bank transaction’. The process of creating a bank transaction is now officially distributed, YESSSS!
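For completeness: the _queueClient used above is nothing more than a Service Bus QueueClient. A possible registration in Startup.cs could look like this (the configuration key and queue name are assumptions for this sketch):

// Startup.ConfigureServices: register one QueueClient for the 'transactions' queue.
services.AddSingleton<IQueueClient>(sp =>
    new QueueClient(Configuration.GetConnectionString("AzureServiceBus"), "transactions"));

The controller then simply gets an IQueueClient injected through its constructor.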

Now, handling the message

Now I need a mechanism that handles the queue message and will actually create the bank transaction for me. I decided to create an Azure Function, because they’re fast, cheap and scale like a maniac. So this solution not only takes the pressure off the old web solution, it also runs in a system that scales with the load and is thus pretty future proof.

[FunctionName("CreateTransaction")]
public static async void CreateTransaction(
    [ServiceBusTrigger("transactions", Connection = "AzureServiceBus")] string message,
    [ServiceBus("bank", Connection = "AzureServiceBus", EntityType = EntityType.Topic)] IAsyncCollector serviceBusTopic,
    [Table("transactions")] IAsyncCollector table,
    ILogger log)
{

    var transaction = JsonConvert.DeserializeObject(message);
    if (transaction != null)
    {
        if (transaction.Amount > 100)
        {
            var integrationEvent = new TransactionCreateFailedIntegrationEvent
            {
                Amount = transaction.Amount,
                FromAccountName = transaction.FromAccountHolder,
                ToAccountName = transaction.ToAccountHolder,
                Reason = "Maximum transaction amount is 100"
            };
            await SendServicebusMessage(integrationEvent, serviceBusTopic);
        }
        else
        {
            var transactionEntity = new TransactionEntity
            {
                PartitionKey = "transaction",
                RowKey = Guid.NewGuid().ToString(),
                FromAccountNumber = transaction.FromAccountNumber,
                FromAccountHolder = transaction.FromAccountHolder,
                ToAccountNumber = transaction.ToAccountNumber,
                ToAccountHolder = transaction.ToAccountHolder,
                Amount = transaction.Amount,
                Description = transaction.Description,
                TransactionOn = transaction.TransactionOn,
                Timestamp = DateTimeOffset.UtcNow
            };
            await table.AddAsync(transactionEntity);
            var integrationEvent = new TransactionCreatedIntegrationEvent
            {
                TransactionId = Guid.Parse( transactionEntity.RowKey),
                FromAccountName= transaction.FromAccountHolder,
                ToAccountName= transaction.ToAccountHolder,
                NewBalance = 3581.53M
            };
            await SendServicebusMessage(integrationEvent, serviceBusTopic);
        }
        await serviceBusTopic.FlushAsync();
    }
}

I know, it’s a large method that may need some refactoring in a production environment (or not), but for this demo it works fine. You can see I use the Service Bus queue trigger to fire the Azure Function. This way, each and every transaction is executed only once, by an instance of the Azure Function. I implemented a validation rule for demo purposes: the amount of the bank transaction cannot be greater than 100. If the transaction meets this validation rule, it will be stored in Table Storage. Whether the validation fails or succeeds, I create an integration event which will be sent to a Service Bus topic. This mechanism allows me to notify the user what actually happened with the ‘create bank transaction’ request.

Oh, and by the way, the SendServicebusMessage() function looks like this:

private static async Task SendServicebusMessage<T>(T message, IAsyncCollector<Message> serviceBusTopic)
{
    var eventName = message.GetType().Name.Replace(IntegrationEventSufix, "");
    var jsonMessage = JsonConvert.SerializeObject(message);
    var body = Encoding.UTF8.GetBytes(jsonMessage);

    var serviceBusErrorMessage = new Message
    {
        MessageId = Guid.NewGuid().ToString(),
        Body = body,
        Label = eventName,
    };
    await serviceBusTopic.AddAsync(serviceBusErrorMessage);
}

Finally, pushing the outcome to the client

I created a Service Bus topic subscription in the ASP.NET project, which allows me to notify the user what happened with the ‘create bank transaction’ request. For the subscription on the Service Bus topic, I used some helper methods from the eShopOnContainers project. I removed the RabbitMQ stuff, leaving me with only a Service Bus connection and the ability to subscribe to certain messages.
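Wiring that up roughly boils down to subscribing the handlers to the integration events in Startup. A sketch following the eShopOnContainers style (the failed-event handler name is a hypothetical counterpart of the handler shown below):

// Startup.Configure (sketch): subscribe the handlers to the topic messages.
var eventBus = app.ApplicationServices.GetRequiredService<IEventBus>();
eventBus.Subscribe<TransactionCreatedIntegrationEvent, TransactionCreatedIntegrationEventHandler>();
eventBus.Subscribe<TransactionCreateFailedIntegrationEvent, TransactionCreateFailedIntegrationEventHandler>();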

I also added SignalR to my project and created a hub so I’m able to send the confirmation message and/or error message to the client (web browser). Then I added a handler for both the error message and the confirmation message. The handlers create an instance of the SignalR Hub and invoke the corresponding method on that SignalR hub.

public class TransactionCreatedIntegrationEventHandler : IIntegrationEventHandler<TransactionCreatedIntegrationEvent>
{
    private readonly IHubContext<TransactionsHub> _hubContext;
    public async Task Handle(TransactionCreatedIntegrationEvent @event)
    {
        var hub = new TransactionsHub(_hubContext);
        await hub.TransactionCreated(new TransactionCreatedDto
        {
            ToAccountName = @event.ToAccountName,
            FromAccountName = @event.FromAccountName,
            NewBalance = @event.NewBalance,
            TransactionId = @event.TransactionId
        });
    }
    public TransactionCreatedIntegrationEventHandler(IHubContext<TransactionsHub> hubContext)
    {
        _hubContext = hubContext;
    }
}

Pretty awesome, right? The full demo source code is available on my GitHub page. I added an Angular client which enables you to post your transactions to the backend. The readme file of the project will explain how to get the project running.

Let me know what you think in the comments below!


So I ran into Azure Functions and realized I totally missed something there. One of my co-workers is a serverless advocate and kind of drew my attention about a year ago. And so I started exploring the world of serverless. My first impression was that it’s hard to learn and complicated, but all these thoughts turned out not to be true… It’s just different. A different way of thinking and a different way of programming.

So as a lot of developers do, I started a project which made sort of sense and started learning while the project evolved. And now, a year has passed. What happened during that year? I created a couple of GitHub repos for the project, threw them away, re-created repos and threw them away as well… And now, a few weeks ago, I started a new repo with some code that I thought was worth sharing. And that’s where we are today…

TL;DR – A cool and awesome URL shortener project running on Azure Functions in probably the cheapest way possible: hit https://4dn.me/azfuncs.

Answer the question please!?

So the question remains… Why are Azure Functions so cool? Well, because you implement them in the easiest way possible. They’re triggered by native cloud services and thus integrate very well into every cloud solution. They scale like a maniac, so huge amounts of traffic are no problem. Oh, and wait… I almost forgot to mention that running Azure Functions is cheap… Really cheap!!

So the project I was talking about is the classic URL shortener project. You paste in a huge, long endpoint URL. The service stores the URL and returns a short code which can be used to visit the URL.

I added login functionality so users are able to manage their short links and change the short code so it’s even easier to remember, as long as the short code is unique.

Finally I want to track hits on each short link so you can see how many hits a short link received and even see the most recent hits in a graph.

If users don’t want to log in, they can just paste a URL and have it shortened. They miss the advantage of being able to change the short code and extend the lifetime of a short link. All links will expire; logged-in users will be able to set or change the expiration date, while anonymous visitors cannot change that date.

So what is an Azure Function?

Basically, it’s very simple… An Azure Function is just a piece of code that runs because it’s executed by a trigger. You want to keep functions lean and clean. Ideally functions have a single purpose (responsibility) and rely as little as possible on code libraries. For example, importing Entity Framework in an Azure Function runs fine and works perfectly. However, the EF library is large and makes your slim and lean function a big rhino running through an Azure datacentre. What you’re looking for is an agile free runner able to manoeuvre through the datacentre at lightning speed.

To help you, there’s a mechanism called bindings. So functions have a trigger, and bindings. With bindings, you are able to connect to other cloud services like storage, the Service Bus, Event Grid, SendGrid and more. And best of all, if the binding you need is not available by default, you’re free to create one yourself. Bindings are either input (stuff coming in) or output (stuff being sent out).

A tiny example

An easy example is sending email. Sending an email message is a relatively heavy process for web applications. Sending email messages within a web request may block additional incoming requests. You don’t want this kind of process in your web request. Writing a function that sends these email messages for you makes your system more distributed and, best of all, removes the heavy process from your web request. Basically, you would store an email message in blob storage and add a message to a queue. A function with a queue trigger, an input binding reading the message from blob storage, and an output binding to send the message using SendGrid would be an excellent solution. And best of all, you just removed the pressure from your web app.
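A sketch of such an email function could look like this. The queue and container names, the message format and the addresses are assumptions; the SendGrid output binding comes from the Microsoft.Azure.WebJobs.Extensions.SendGrid package:

[FunctionName("SendQueuedEmail")]
public static void SendQueuedEmail(
    [QueueTrigger("outgoing-email")] string blobName,                     // the queue message holds the blob name
    [Blob("email-bodies/{queueTrigger}", FileAccess.Read)] string body,   // input binding: read the stored body
    [SendGrid(ApiKey = "SendGridApiKey")] out SendGridMessage email,      // output binding: send it through SendGrid
    ILogger log)
{
    log.LogInformation($"Sending email stored in blob {blobName}");

    email = new SendGridMessage();
    email.AddTo("someone@example.com");
    email.SetFrom(new EmailAddress("noreply@example.com"));
    email.SetSubject("Message from the web app");
    email.AddContent("text/html", body);
}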

So how does my demo app work?

An endpoint URL is passed to the backend, which generates a unique short code and stores the link in Table Storage. Pretty straightforward.

public static async Task<HttpResponseMessage> CreateShortLink(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "links")] HttpRequestMessage req,
    [Table(Constants.TableStorageLinksTableName)] CloudTable table,
    ILogger log)

This function uses an HTTP trigger to fire (i.e. it waits for a web request). It uses an input binding to Table Storage and accepts a CloudTable, so I can query for existing short codes and store the new short link in case everything is fine.

Then a couple of validations take place, and a unique short code is generated. In the end, I use the table to store the new short link.
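Generating that unique short code could look something like this; a hypothetical sketch that simply rolls a random code and checks the table for collisions (the real project may do it differently):

private static readonly string Alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

private static async Task<string> GenerateUniqueShortCodeAsync(CloudTable table, int length = 6)
{
    var random = new Random();
    while (true)
    {
        // Roll a random candidate code of the requested length.
        var candidate = new string(Enumerable.Range(0, length)
            .Select(_ => Alphabet[random.Next(Alphabet.Length)]).ToArray());

        // Check the table for an existing short link with the same code.
        var query = new TableQuery<ShortLinkEntity>().Where(
            TableQuery.GenerateFilterCondition("ShortCode", QueryComparisons.Equal, candidate));
        var existing = await table.ExecuteQuerySegmentedAsync(query, null);

        if (!existing.Results.Any())
        {
            return candidate;
        }
    }
}

With a unique code available, storing the link itself is the plain table insert shown below.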

var entity = new ShortLinkEntity
{
    ShortCode = validShortCode,
    RowKey = Guid.NewGuid().ToString(),
    CreatedOn = DateTimeOffset.UtcNow,
    EndpointUrl = shortLinkDto.EndpointUrl,
    ExpiresOn = expirationDate,
    PartitionKey = Constants.TableStorageLinksPartitionKey,
    Timestamp = DateTimeOffset.UtcNow,
    TotalHits = 0,
    OwnerId = owner
};
var operation = TableOperation.Insert(entity);
var result = await table.ExecuteAsync(operation);

Then I return an HTTP response containing information about the new short link.

Now when one of the short links is hit, the system needs to check whether the short code exists and retrieve the endpoint associated with that short link. But because this is a cool and fancy Azure Functions demo app, I want to track hits per short link. So I also write a ‘hit’ to a storage queue.
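Writing that hit is just a queue output binding. A sketch of what the redirect function might look like; the route, the lookup and the redirect details are assumptions, and the ShortLinkHitDto shape is inferred from the processing function below:

[FunctionName("RedirectShortLink")]
public static async Task<HttpResponseMessage> RedirectShortLink(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "{shortCode}")] HttpRequestMessage req,
    [Table(Constants.TableStorageLinksTableName)] CloudTable table,
    [Queue(Constants.TableStorageQueueHits)] IAsyncCollector<ShortLinkHitDto> hits,
    string shortCode,
    ILogger log)
{
    // Find the short link entity belonging to this short code.
    var query = new TableQuery<ShortLinkEntity>().Where(
        TableQuery.GenerateFilterCondition("ShortCode", QueryComparisons.Equal, shortCode));
    var segment = await table.ExecuteQuerySegmentedAsync(query, null);
    var entity = segment.Results.FirstOrDefault();
    if (entity == null)
    {
        return req.CreateResponse(HttpStatusCode.NotFound);
    }

    // Queue the hit so a separate function can update the counters.
    await hits.AddAsync(new ShortLinkHitDto
    {
        ShortCode = entity.ShortCode,
        RowKey = entity.RowKey,
        HitOn = DateTimeOffset.UtcNow
    });

    // Redirect the visitor to the original endpoint URL.
    var response = req.CreateResponse(HttpStatusCode.Redirect);
    response.Headers.Location = new Uri(entity.EndpointUrl);
    return response;
}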

A different function will be triggered when a message arrives on that queue, and starts processing the information about that hit. Here is the entire function:

[FunctionName("ProcessIncomingHit")]
public static async void Run(
    [QueueTrigger(Constants.TableStorageQueueHits)]ShortLinkHitDto hitDto,
    [Table(Constants.TableStorageLinksTableName)] CloudTable table,
    [Table(Constants.TableStorageHitsTableName)] CloudTable hitsTable,
    ILogger log)
{
            
    log.LogInformation($"Hit received for processing {hitDto.ShortCode}");
    var fetchOperation =
        TableOperation.Retrieve(Constants.TableStorageLinksPartitionKey, hitDto.RowKey);
    var retrievedResult = await table.ExecuteAsync(fetchOperation);
    if (retrievedResult.Result is ShortLinkEntity shortLinkEntity)
    {
        var hitEntity = new HitEntity
        {
            PartitionKey = Constants.TableStorageHitsPartitionKey,
            RowKey = Guid.NewGuid().ToString(),
            ShortCode = hitDto.ShortCode,
            HitOn = hitDto.HitOn,
            Timestamp = DateTimeOffset.UtcNow
        };


        shortLinkEntity.TotalHits = shortLinkEntity.TotalHits + 1;
        var insertOperation = TableOperation.Insert(hitEntity);
        await hitsTable.ExecuteAsync(insertOperation);
        var updateOperation = TableOperation.InsertOrReplace(shortLinkEntity);
        await table.ExecuteAsync(updateOperation);
    }
}

Obviously, the function is triggered on the arrival of a message on the storage queue. I added bindings to the original short links table and to a hits table. The original short links table is used to increment the total hits counter of the short link. I also add a new entity to the hits table. This is used by an aggregate function that allows me to draw a graph of the hits over the past week.

The full source code can be found here.


And so today I found this really cool feature in Azure Key Vault… We know by now what the Key Vault is and what it’s capable of. But when developing an ASP.NET Core web app, I found the usual solutions not optimal. You can either sort of inject your secrets into the system yourself, like so:
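Roughly, that approach boils down to calling the Key Vault API directly from your code; a sketch with a placeholder vault URL and secret name (using Microsoft.Azure.KeyVault and Microsoft.Azure.Services.AppAuthentication):

// Authenticate with a managed identity (or developer credentials) and fetch the secret by hand.
var tokenProvider = new AzureServiceTokenProvider();
var keyVaultClient = new KeyVaultClient(
    new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

// The config key lives in code, so every change means a new deployment.
var secret = await keyVaultClient.GetSecretAsync(
    "https://your-key-vault.vault.azure.net/secrets/StorageConnectionString");
var storageConnectionString = secret.Value;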

But this is far from optimal in your development environment. Also, the API calls hide the config keys in the code, meaning that changes would require development and re-deployment of the system.

A second approach, which is fairly nice, is to use ARM templates. This means that app settings will be created during deployment of the template, and their values contain the values of the secrets. This way, keys are easily changed; however, it’s a bit confusing when values do change. In fact, the best way is to redeploy the ARM template when values change.

Personally I really like Visual Studio’s approach of having an appsettings.json in your project, so the file structure is obvious. And because there are User Secrets, you don’t have to add sensitive data to the appsettings.json file. Just right-click the project and choose ‘Manage User Secrets’. Values in the secrets.json file override values in the appsettings.json file.

But now the Key Vault… I think the configuration settings of an Azure Web App are ideal for adding or overwriting configuration. Just click your Web App in the Azure Portal and go to the Configuration blade. You can add values here and let them override values in appsettings.json as well. The problem is that secrets entered here are no longer secret. Time for a change.

I created a new resource group on Azure and added a Key Vault and a Web App. Please note it’s not mandatory to have everything in the same resource group. Now we need to tell the Key Vault that the web app is allowed to access our secrets. In order to do so, the Web App must be given an identity so we can reference it. Click on your web app so you can access its properties, and look for the Identity settings. Once selected, you can switch the toggle to On, click Save and then confirm your action.

Then go to your Key Vault and find the Access Policies tab. Here you click ‘Add access policy’. A new blade opens that allows you to select users and apps and grant permissions to them. Because I gave the Web App an identity (I named my web app disco-ball), I’m able to select it in the ‘Select principal’ field. I leave the template field alone; I decided to only set individual permissions. My disco-ball app can only get keys, secrets and certificates. Nothing more. The settings blade for your access policy should look similar to mine.


Click the Add button to return to the list of access policies and don’t forget to hit the save button there. I always forget the save button and then wonder why nothing works…

What we did here was grant the disco-ball web app access to the Key Vault. It is only capable of reading keys, secrets and certificates. And now on to why I think this is such a nice solution. Let’s make a secret, for example a connection string to a storage account. Go to your Key Vault, open the Secrets blade and click Generate/Import. Enter a name for your secret, let’s say StorageConnectionString, and the value (I’m not going to post one here, because it’s secret obviously). Click the Create button and you will return to your list of secrets.

Surprise, your first secret is there. Now to read this secret from your web app, click on the secret and open the specific version of the secret you want to use. If this is a new secret, only one version will be available. If you change the value over time, new versions will be created. When you have opened the version you want, you see the details and, more importantly, the ‘Secret identifier’. Click the ‘Copy to clipboard’ icon right after the ‘Secret identifier’ field.

Now we head back to your Web App and open the configuration blade. This configuration blade can be used to override app settings of your Web App. If you’re not familiar with this technique you may want to read the documentation first.

Anyway, create a new application setting and name it the same as the setting in your appsettings.json. And for the value, you enter… drum roll…

@Microsoft.KeyVault(SecretUri=<secret-identifier-here>)
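On the application side nothing changes; the secret arrives through the normal configuration system, so reading it looks like reading any other setting. A minimal sketch, with ‘StorageConnectionString’ as the example setting from above:

public class StorageService
{
    private readonly string _connectionString;

    public StorageService(IConfiguration configuration)
    {
        // The value comes from appsettings.json, secrets.json or the Key Vault reference,
        // depending on where the app runs.
        _connectionString = configuration["StorageConnectionString"];
    }
}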

And that’s the trick! Your setting value is now replaced with the value of your secret. Personally I think this is a really nice solution, because you don’t have to set up a Key Vault client or change anything in your ‘regular development flow’. You can even use appsettings.json and secrets.json in your local development environment without any problems: a clean, fast and neat solution. Let me know what you think in the comments below!


So as promised, I'm back with part 2 of the Valet Key design pattern blog. In part 1 I showed how the design pattern works, how it works on Azure and how to implement the pattern using ASP.NET Core. In this second part, I'll show how to implement the client solution. I chose to use an Angular client, not only because I like Angular, but also because it's nice to have a JavaScript implementation uploading directly into the cloud.

Let's briefly recap part 1

I wrote a backend system that creates a Shared Access Signature (or SAS). That SAS is valid for a specific endpoint. This endpoint represents a blob in Azure Storage. The SAS grants write access to that endpoint. The backend system returns the endpoint and the corresponding SAS.
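As a reminder, the heart of that part 1 backend roughly looks like this; a sketch using the Microsoft.Azure.Storage.Blob SDK, where the container name and the lifetimes are examples:

// Create a write-only SAS for one specific blob.
var account = CloudStorageAccount.Parse(storageConnectionString);
var container = account.CreateCloudBlobClient().GetContainerReference("uploads");
var blob = container.GetBlockBlobReference(fileName);

var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Write,
    SharedAccessStartTime = DateTimeOffset.UtcNow.AddMinutes(-5),
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(30)
});

// The backend returns the blob URI plus the SAS; the client combines the two to upload directly.
var storageUri = blob.Uri.ToString();
var storageAccessToken = sas;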

Make use of the Azure Blob Storage JavaScript library

Microsoft distributes a package of JavaScript files that enable you to communicate with an Azure Storage account. You can download that package at https://aka.ms/downloadazurestoragejs. I used version 2.10.103 in my project. If you unzip the package, you actually only need the azure-storage.blob.js file from the bundle folder.

Next up, Angular

Now you need to create an Angular project. If you don't feel like creating this project all by yourself, you may want to take a peek at my sample project on GitHub.

To create an Angular project, download and install NodeJS 10 and open a console window.

When I create an Angular project, I like to configure SASS as the default style type, and for this project let's disable testing. I also added the Angular router, which is huge overkill for this project.

npm i @angular/cli -g
ng new upload-example --style=scss --skip-tests --routing=true
cd upload-example
ng serve -o

You just installed the Angular CLI (globally) and created a new Angular project. A new Angular project is created in a folder with the same name, so we navigate to the newly created project and start serving it with the development server. The -o parameter opens the project in your default browser when the compiler is done and the dev server has started.

Now in the assets folder, create a new folder 'scripts' and add the azure-storage.blob.js file. Then make sure Angular outputs that script by adding it to angular.json

"scripts": ["src/assets/scripts/azure-storage.blob.js"]

Then replace all HTML in the app.component.html file with <router-outlet></router-outlet>.

Adding services

OK, now the hard part. I found this clever guy Stuart, who had a great implementation for uploading files to blob storage. I extended his services to make a call to the backend prior to uploading, in order to get a valid endpoint and SAS. So in the app folder, I created a new folder 'services' and added azure-storage.ts and blob-storage.service.ts.

Coming home

Then I added a module and component for the landing page:

ng g module pages/home
ng g component pages/home/home

This generates a Home module with a Home component in it. Be sure to import the HomeModule in your app.module.ts, otherwise Angular will not be able to show the component.

The home.component.ts contains some methods that allow the upload to happen. But first, let's add a file select box to the home.component.html file. Note the (change) event passing changes to the home component.

onFileChange(event: any): void {
   this.filesSelected = true;
   this.uploadProgress$ = from(event.target.files as FileList).pipe(
     map(file => this.uploadFile(file)),
     combineAll()
   );
}

As you can see, the event handler loops through all files selected and fires this.uploadFile() passing the selected file. The uploadFile() method accepts the file and requests an endpoint and a SAS from our backend system.

uploadFile(file: File): Observable<IUploadProgress> {
   return this.blobStorage
     .aquireSasToken(file.name)
     .pipe(
       switchMap((e: ISasToken) =>
         this.blobStorage
           .uploadToBlobStorage(e, file)
           .pipe(map(progress => this.mapProgress(file, progress)))
       )
     );
}

The service contains a method aquireSasToken(), which calls our backend. The backend uses a valid Azure Storage account connection string to create a SAS for a certain endpoint and returns this information. Then the uploadToBlobStorage() method is called, which uses the SAS to determine where to upload to, and also accepts the file. The mapProgress() method keeps track of the upload progress and shows it as a percentage.

One final note

Just like your ASP.NET web application, an Azure Storage account is protected with CORS. So in order to upload blobs using a JavaScript system like Angular, you need to set some CORS rules in Azure. This Angular project runs on localhost port 4200, so I added this origin, accepting all methods and all headers. Note that these settings are not recommended in production environments.
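If you'd rather script this than click through the portal, the same CORS rule can be set with the storage SDK. A sketch; the connection string variable is assumed and the values match the demo settings above:

// Set a CORS rule on the blob service from C# instead of through the portal.
var account = CloudStorageAccount.Parse(storageConnectionString);
var blobClient = account.CreateCloudBlobClient();

var serviceProperties = await blobClient.GetServicePropertiesAsync();
serviceProperties.Cors.CorsRules.Clear();
serviceProperties.Cors.CorsRules.Add(new CorsRule
{
    AllowedOrigins = new List<string> { "http://localhost:4200" },
    AllowedMethods = CorsHttpMethods.Put | CorsHttpMethods.Get | CorsHttpMethods.Head | CorsHttpMethods.Options,
    AllowedHeaders = new List<string> { "*" },
    ExposedHeaders = new List<string> { "*" },
    MaxAgeInSeconds = 3600
});
await blobClient.SetServicePropertiesAsync(serviceProperties);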