
So it’s 2019 now, almost 2020. More and more companies are migrating their solutions to the cloud. In my profession, building cloud and web solutions, I see companies moving their systems to different architectures that are better suited to cloud environments. Classic web systems are replaced by serverless systems, and huge IT systems are torn apart into microservices. Since Azure Functions became mature, they’re a really good replacement for classic ASP.NET web solutions running on a Web App or on IIS, so I started using these serverless solutions more and more. Because I also like to create user-friendly software, I often use the SignalR real-time framework to notify the user of processes going on on the server. For example, when sending a command to a serverless function, you may want to inform the user whether processing that command was successful (or not). In the past, you needed a web host to run SignalR, but running a web host in the cloud is relatively expensive. Today, SignalR is one of the native cloud services Azure can deliver. In this blog, I’m going to implement this SignalR Service.

The demo project

Developers always create demo projects to try something new. The idea is great, but there’s never time to finish the project, so it lands in the trash can somewhere in the next five years. So for this blog, to show how the SignalR Service works, I… yes… created a demo project. It’s an Angular front-end project that uploads images to Azure Blob Storage. An Azure Function is triggered by the blob creation and starts resizing the image into two versions: a thumbnail and a fairly decent web size (1024 x 768). Image references are stored in Azure Table Storage, and once both images are sized correctly, the state of the image in Table Storage is set to available. Then a message is broadcast using SignalR, which enables the front-end to respond. Pretty awesome, and you could use this exact same scenario for importing data, for example: just upload the data, write a function that imports it, and report status through SignalR.

So first I navigated to the Azure Portal and started creating a SignalR Service.

Once Azure has created the resource, navigate to the newly created SignalR Service and open the Keys blade. Here you’ll find two keys and two connection strings. Copy one of the connection strings; you’re going to need it in the Azure Functions project. Then navigate to the CORS blade and check whether there’s an allowed origin *. If not, add it. You may want to change this to a valid endpoint once your system goes to production, but for this demo you’ll be fine. Please note that I selected Serverless as the Service Mode. This mode should only be selected when you use SignalR from an Azure Functions project.

Next Up, The Functions project

Now open Visual Studio; I used VS 2019 (16.3.18) and Azure Functions v2. Create a new Azure Functions project and check whether your project contains a local.settings.json file. If not, create it and add the copied connection string value as a setting called ‘AzureSignalRConnectionString’. Your local.settings.json should look like this (or something similar):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureSignalRConnectionString": "Endpoint=https://your-signalr.service.signalr.net;AccessKey=/--secred-access-key-here--/;Version=1.0;"
  },
  "Host": {
    "LocalHttpPort": 7071,
    "CORS": "http://localhost:4200",
    "CORSCredentials": true
  }
}

The Angular client makes HTTP requests to the negotiate function to initiate the connection negotiation. When the client application is hosted on a different domain than the Azure Function app, cross-origin resource sharing (CORS) must be enabled on the Function app or the browser will block the requests. This is why I also added some CORS settings in the settings file. I know my Angular client is going to run on localhost port 4200. Once again, you may want to change these settings once you go to production.

As you all know, an Azure Function is fired by a trigger and may use bindings (input and/or output) to consume external data or services, or to send data to external services. We’re going to use a SignalR output binding, which means we send data out to the SignalR Service. This data fires an event on the client, which can be handled accordingly. The bindings for the SignalR Service can be installed by adding a NuGet package to your project. Look for the package called Microsoft.Azure.WebJobs.Extensions.SignalRService; my project used version 1.0.2, just so you know.

Now it’s time to implement the negotiate endpoint. SignalR uses this endpoint to initiate a connection and determine server and client capabilities. In your Azure Functions project, create a new function with an HTTP trigger that looks like this:

[FunctionName("negotiate")]
public static SignalRConnectionInfo SignalRNegotiate(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")]  HttpRequestMessage req,
    [SignalRConnectionInfo(HubName = "notifications")] SignalRConnectionInfo connectionInfo)
{
    return connectionInfo;
}

That's pretty much all there is to it. This endpoint allows the client to connect to the SignalR Service. Connecting to this endpoint redirects to your SignalR Service, which in turn returns its capabilities (like available transport types and so on).

I explained that I persist a reference to uploaded pictures in table storage. Once a file is uploaded and successfully scaled, I send a command on a queue that sets an availability flag on the picture entity in table storage. When the table entity is successfully updated, I send a message through the SignalR Service.

The function looks like this (I stripped code that doesn’t add value for this demo):

[FunctionName("PictureStatusCommandQueueHandler")]
public static async Task PictureStatusCommandQueueHandler(
    [QueueTrigger(Constants.QueueNamePictureStatusCommands)] string pictureStatusCommandMessage,
    [Table(TableNames.Pictures)] CloudTable picturesTable,
    [SignalR(HubName = SignalRHubNames.NotificationsHub)] IAsyncCollector signalRMessages,
    ILogger log)
{
    log.LogInformation("Picture status command retrieved");
    SetStorageConsumptionCommand consumptionCommand = null;
    ...
    if (...)
    {

	...
        Update the table entity here
        ...

        var pictureDto = new PictureDto
        {
            CreatedOn = entity.Timestamp,
            Id = Guid.Parse(entity.RowKey),
            Name = entity.Name,
            Url = picturesTable.ServiceClient.BaseUri.ToString()
        };
        await signalRMessages.AddAsync(
            new SignalRMessage
            {
                Target = "newPicture",
                Arguments = new object[] { pictureDto }
            });
        }
    }
    return consumptionCommand;
}

So what happens here is basically that I create a Data Transfer Object (DTO), which I want to push to the client, and I happen to use SignalR as a mechanism to do that for me. The DTO will be converted to JSON and passed to the client. The Target here (newPicture) is the event that will be raised client side, and the arguments can be seen as the payload of that message.

The Angular project

Before we run into a discussion that doesn’t make sense: I’m a cloud solution architect and I really like C# and the Microsoft development stack. I also have a strong affinity with Angular. The fact that I use Angular for this demo project doesn’t mean it’s the best solution; Vue, React and all the other frameworks/component libraries work fine! So I created this Angular project and inside that project created a service. This service uses the @aspnet/signalr package, so you need to install that. For your information, my demo project used version 1.1.4.

npm i @aspnet/signalr

or yarn if you like

yarn add @aspnet/signalr

Now back to the service. Since the service is quite large, I created a GitHub Gist here. The service contains a connect and a disconnect function. The endpoint to connect to is your Azure Functions project URL: http://{az-functions-project}/api

By connecting to that location, the SignalR client will send a post request to the negotiate endpoint of your Azure Functions project, and the SignalR service does the rest for you.

Now if you scroll down to line 22 of the gist, you see this code:

this.connection.on('newPicture', (dto: PictureDto) => {
    console.log('New picture arrived');
});

This fragment subscribes to the ‘newPicture’ event. Remember the Azure Function in which we send a message with a target ‘newPicture’? Well, this is the event handler on the client handling that event. In this case, a message is written to the browser’s console, but you also see the dto of type PictureDto, which contains the actual information about the image as it was passed by the Azure Function.

Now create a component that consumes the realtime service and calls the service’s connectSignalR() function and you’re good to go!!

I have quite some history with SignalR, so I expected a very complicated solution. It took me some time to figure out how the SignalR service is implemented, but mostly because I expected something difficult. The reality is that the SignalR Service integrates extremely well and lowers the complexity bar big time! Have fun experimenting!


Today, everyone is moving their software systems to the cloud. Personally, I’m pretty much a fan of Microsoft Azure. My job is to support companies migrating software systems to the cloud. What I see is that a lot of companies and developers don’t really know how cloud solutions work and how you can make them work for you.

One of those services in Microsoft Azure is the Service Bus. It’s a messaging system designed for software systems, or software components, to communicate with each other.

Now, when you have an ASP.NET website running somewhere in a data center of choice and you want to move to the cloud (Azure), you can simply create a Web App and host the website as is. However, when your system starts getting more and more load, you need to scale (up or out), which is fairly expensive. You can save a lot of money by investigating why your system demands these resources and why scaling up or out is a requirement.

Often there is just one single part of the website demanding these resources, while all the other parts are running just fine. Take a bank, for example: the services for creating a new account, changing an address or requesting a new debit card demand far fewer resources than, say, the transactions service that handles money transfers. In such a case, it can be valuable to take the pressure off the transactions service by distributing the workload. The Service Bus is an excellent native cloud service that will definitely help you, and I’m going to explain how.

The basics of the Service Bus

So what is this Service Bus thing? Well, basically a very simple messaging mechanism. It contains Queues and Topics. The difference is that a Queue is, as its name suggests, a queue of messages. Each message will be delivered only once to any system reading from that queue. For example, when you make a bank transfer, you want that transfer to take place only once. So when multiple systems read from the queue and a new message arrives, only one of those systems will receive the message. A Topic can be compared to a newspaper, or your favorite magazine: whoever has a subscription gets the message as soon as it comes out. So if multiple systems have a subscription to a certain message, the message will be delivered multiple times.
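To make that a bit more concrete, here’s a minimal sketch using the Microsoft.Azure.ServiceBus package; the connection string, queue, topic and message bodies are placeholders I made up:

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class ServiceBusBasics
{
    private const string ConnectionString = "<your-service-bus-connection-string>";

    public static async Task SendExamplesAsync()
    {
        // Queue: the message is delivered to exactly one of the competing consumers
        var queueClient = new QueueClient(ConnectionString, "transactions");
        await queueClient.SendAsync(new Message(Encoding.UTF8.GetBytes("{ \"amount\": 10 }")));

        // Topic: every subscription on the topic receives its own copy of the message
        var topicClient = new TopicClient(ConnectionString, "bank");
        await topicClient.SendAsync(new Message(Encoding.UTF8.GetBytes("{ \"event\": \"something happened\" }")));
    }
}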

A tiny side-step to Microservices

In case you’re developing microservices, you may need a messaging system to make sure you meet the eventual consistency requirement. Only one microservice will be responsible for manipulating a certain entity, but more services may need to receive an update of the changed entity. The Service Bus would be an excellent solution here, because you can easily broadcast the updated entity through a topic. All services that need this update can subscribe to that certain message.

A practical example

So here we go, an example that makes sense. Let’s take the bank example: a transactions service that demands a lot of resources because it draws a lot of traffic and a lot of validations take place during each request. That makes it a good candidate for change.

[HttpPost]
public async Task<IActionResult> Post([FromBody] CreateTransactionDto dto)
{
    if (ModelState.IsValid)
    {
        dto.TransactionOn = DateTimeOffset.UtcNow;
        var messageBody = JsonConvert.SerializeObject(dto);
        var message = new Message(Encoding.UTF8.GetBytes(messageBody));
        await _queueClient.SendAsync(message);
        return Accepted(dto);
    }
    return BadRequest();
}

In the previous block of code, I removed all the validations and ‘heavy’ stuff that demand a lot of resources. Usually when you create a bank transaction, a large number of validations are required to make sure the transaction can actually take place. The only validation done here is the ModelState validation. The next step is creating a Service Bus message, which is sent using a queue client. In this example I return an Accepted HTTP response to indicate that I ‘accepted the request to create a bank transaction’. The process of creating a bank transaction is now officially distributed, YESSSS!

Now, handling the message

Now I need a mechanism that handles the queue message and actually creates the bank transaction for me. I decided to create an Azure Function, because they’re fast, cheap and scale like a maniac. So this solution not only takes the pressure off the old web solution, it also moves the work into a system that scales with the load and is thus pretty future proof.

[FunctionName("CreateTransaction")]
public static async void CreateTransaction(
    [ServiceBusTrigger("transactions", Connection = "AzureServiceBus")] string message,
    [ServiceBus("bank", Connection = "AzureServiceBus", EntityType = EntityType.Topic)] IAsyncCollector serviceBusTopic,
    [Table("transactions")] IAsyncCollector table,
    ILogger log)
{

    var transaction = JsonConvert.DeserializeObject(message);
    if (transaction != null)
    {
        if (transaction.Amount > 100)
        {
            var integrationEvent = new TransactionCreateFailedIntegrationEvent
            {
                Amount = transaction.Amount,
                FromAccountName = transaction.FromAccountHolder,
                ToAccountName = transaction.ToAccountHolder,
                Reason = "Maximum transaction amount is 100"
            };
            await SendServicebusMessage(integrationEvent, serviceBusTopic);
        }
        else
        {
            var transactionEntity = new TransactionEntity
            {
                PartitionKey = "transaction",
                RowKey = Guid.NewGuid().ToString(),
                FromAccountNumber = transaction.FromAccountNumber,
                FromAccountHolder = transaction.FromAccountHolder,
                ToAccountNumber = transaction.ToAccountNumber,
                ToAccountHolder = transaction.ToAccountHolder,
                Amount = transaction.Amount,
                Description = transaction.Description,
                TransactionOn = transaction.TransactionOn,
                Timestamp = DateTimeOffset.UtcNow
            };
            await table.AddAsync(transactionEntity);
            var integrationEvent = new TransactionCreatedIntegrationEvent
            {
                TransactionId = Guid.Parse( transactionEntity.RowKey),
                FromAccountName= transaction.FromAccountHolder,
                ToAccountName= transaction.ToAccountHolder,
                NewBalance = 3581.53M
            };
            await SendServicebusMessage(integrationEvent, serviceBusTopic);
        }
        await serviceBusTopic.FlushAsync();
    }
}

I know, it’s a large method that may need some refactoring in a production environment (or not), but for this demo it works just fine. You can see I use the Service Bus Queue Trigger to fire the Azure Function. This way, each and every transaction is executed only once, by one instance of the Azure Function. I implemented a validation rule for demo purposes: the amount of the bank transaction cannot be greater than 100. If the transaction meets this validation rule, it is stored in table storage. Whether the validation fails or succeeds, I create an integration event that is sent to a Service Bus Topic. This mechanism allows me to notify the user what actually happened with the ‘create bank transaction’ request.

Oh, and by the way, the SendServicebusMessage() function looks like this:

private static async Task SendServicebusMessage<T>(T message, IAsyncCollector<Message> serviceBusTopic)
{
    var eventName = message.GetType().Name.Replace(IntegrationEventSufix, "");
    var jsonMessage = JsonConvert.SerializeObject(message);
    var body = Encoding.UTF8.GetBytes(jsonMessage);

    var serviceBusErrorMessage = new Message
    {
        MessageId = Guid.NewGuid().ToString(),
        Body = body,
        Label = eventName,
    };
    await serviceBusTopic.AddAsync(serviceBusErrorMessage);
}

Finally, pushing the outcome to the client

I created a Service Bus Topic subscription in the ASP.NET project that allows me to notify the user what happened with the ‘create bank transaction’ request. For the subscription on the Service Bus Topic, I used some helper methods from the eShopOnContainers project. I removed the RabbitMQ stuff, leaving me with only a Service Bus connection and the ability to subscribe to certain messages.
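Subscribing then looks roughly like this; it’s a sketch based on the eShopOnContainers IEventBus abstraction, and the handler for the failure event is assumed to exist alongside the one shown below:

// In Startup.Configure (or wherever the event bus is wired up)
var eventBus = app.ApplicationServices.GetRequiredService<IEventBus>();

// One subscription per integration event, each with its own handler
eventBus.Subscribe<TransactionCreatedIntegrationEvent, TransactionCreatedIntegrationEventHandler>();
eventBus.Subscribe<TransactionCreateFailedIntegrationEvent, TransactionCreateFailedIntegrationEventHandler>();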

I also added SignalR to my project and created a hub so I’m able to send the confirmation message and/or error message to the client (web browser). Then I added a handler for both the error message and the confirmation message. The handlers create an instance of the SignalR Hub and invoke the corresponding method on that SignalR hub.

public class TransactionCreatedIntegrationEventHandler : IIntegrationEventHandler<TransactionCreatedIntegrationEvent>
{
    private readonly IHubContext<TransactionsHub> _hubContext;

    public TransactionCreatedIntegrationEventHandler(IHubContext<TransactionsHub> hubContext)
    {
        _hubContext = hubContext;
    }

    public async Task Handle(TransactionCreatedIntegrationEvent @event)
    {
        var hub = new TransactionsHub(_hubContext);
        await hub.TransactionCreated(new TransactionCreatedDto
        {
            ToAccountName = @event.ToAccountName,
            FromAccountName = @event.FromAccountName,
            NewBalance = @event.NewBalance,
            TransactionId = @event.TransactionId
        });
    }
}
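For completeness, the TransactionsHub used above could look something like this; it’s a sketch, and the client-side event name ‘transactionCreated’ is my own choice:

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class TransactionsHub : Hub
{
    private readonly IHubContext<TransactionsHub> _hubContext;

    public TransactionsHub(IHubContext<TransactionsHub> hubContext)
    {
        _hubContext = hubContext;
    }

    // Pushes the confirmation to all connected clients listening for 'transactionCreated'
    public async Task TransactionCreated(TransactionCreatedDto dto)
    {
        await _hubContext.Clients.All.SendAsync("transactionCreated", dto);
    }
}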

Pretty awesome, right? The full demo source code is available on my GitHub page. I added an Angular client that enables you to post your transactions to the backend. The readme file of the project explains how to get the project running.

Let me know what you think in the comments below!


So I ran into Azure Functions and realized I had totally missed something there. One of my co-workers is a serverless advocate and kind of drew my attention to it about a year ago. And so I started exploring the world of serverless. My first impression was that it’s hard to learn and complicated, but all these thoughts turned out not to be true… It’s just different. A different way of thinking and a different way of programming.

So, as a lot of developers do, I started a project that made sort of sense and started learning while the project evolved. And now a year has passed. What happened during that year? I created a couple of GitHub repos for the project, threw them away, re-created repos and threw them away as well… And then, a few weeks ago, I started a new repo with some code that I thought was worth sharing. And that’s where we are today…

TL;DR – A cool and awesome URL shortener project running on Azure Functions in probably the cheapest way possible; hit https://4dn.me/azfuncs.

Answer the question please!?

So the question remains… why are Azure Functions so cool? Well, because you implement them in the easiest way possible. They’re triggered by native cloud services and thus integrate very well in every cloud solution. They scale like a maniac, so huge amounts of traffic are no problem. Oh, and wait… I almost forgot to mention that running Azure Functions is cheap… Really cheap!!

So the project I was talking about is the classic URL shortener project. You paste in a long endpoint URL. The service stores the URL and returns a short code that can be used to visit the URL.

I added login functionality so users are able to manage their short links and change the short code to something that’s even easier to remember, as long as the short code is unique.

Finally I want to track hits on each short link so you can see how many hits a short link received and even see the most recent hits in a graph.

If users don’t want to log in, they can just paste a URL and have it shortened. They miss the advantage of being able to change the short code and extend the lifetime of a short link. All links expire; logged-in users are able to set or change the expiration date, anonymous visitors cannot.

So what is an Azure Function?

Basically, very simple… An Azure Function is just a piece of code that runs because it’s executed by a trigger. You want to keep functions lean and clean. Ideally, functions have a single purpose (responsibility) and rely as little as possible on code libraries. For example, importing Entity Framework in an Azure Function runs fine and works perfectly. However, the EF library is large and makes your slim and lean function a big rhino running through an Azure datacentre. What you’re looking for is an agile free runner able to manoeuvre through the datacentre at lightning speed.

To help you, there’s a mechanism called bindings. So functions have a trigger, and bindings. With bindings, you are able to connect to other cloud services like storage, the Service Bus, Event Grid, SendGrid and more. And best of all, if the binding you need is not available by default, you’re free to create one yourself. Bindings are input (stuff coming in) or output (stuff going out).

A tiny example

An easy example is sending email. Sending an email message is a relatively heavy process for web applications, and sending email messages within a web request may block additional incoming requests. You don’t want this kind of process inside your web request. Writing a function that sends these email messages for you makes your system more distributed and, best of all, removes the heavy process from your web request. Basically, you would store the email message in blob storage and add a message to a queue. A function with a queue trigger, an input binding reading the message from blob storage, and an output binding to send the message using SendGrid would be an excellent solution. And best of all, you just removed the pressure from your web app.
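A sketch of what such a function could look like, using the SendGrid output binding from the Microsoft.Azure.WebJobs.Extensions.SendGrid package; the queue name, blob container, app setting name and addresses are all made up for this example:

[FunctionName("SendPendingEmail")]
public static void SendPendingEmail(
    [QueueTrigger("email-requests")] string blobName,                        // the queue message holds the blob name
    [Blob("email-bodies/{queueTrigger}", FileAccess.Read)] string emailBody, // input binding reads the stored body
    [SendGrid(ApiKey = "SendGridApiKey")] out SendGridMessage message,       // output binding sends the mail
    ILogger log)
{
    log.LogInformation($"Sending email for blob {blobName}");

    message = new SendGridMessage();
    message.AddTo("someone@example.org");
    message.SetFrom(new EmailAddress("noreply@example.org"));
    message.SetSubject("Your message");
    message.AddContent("text/html", emailBody);
}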

So how does my demo app work?

An endpoint URL is passed to the backend, which generates a unique short code and stores the link in table storage. Pretty straightforward.

public static async Task<HttpResponseMessage> CreateShortLink(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "links")] HttpRequestMessage req,
    [Table(Constants.TableStorageLinksTableName)] CloudTable table,
    ILogger log)

This function uses an HTTP trigger to fire (i.e. it waits for a web request). It uses an input binding to table storage and accepts a CloudTable, so I can query for existing short codes and store the new short link when everything is fine.

Then a couple of validations take place, and a unique short code is generated. In the end, I use the table to store the new short link.
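Generating that unique short code could be done like this; just a sketch, where the helper name, code length and alphabet are my own assumptions:

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Table;

public static class ShortCodeGenerator
{
    private static readonly Random Random = new Random();

    public static async Task<string> GenerateUniqueShortCodeAsync(CloudTable table, int length = 6)
    {
        const string alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
        while (true)
        {
            // Generate a random candidate code
            var code = new string(Enumerable.Range(0, length)
                .Select(_ => alphabet[Random.Next(alphabet.Length)])
                .ToArray());

            // Check the links table for collisions and only return a code that isn't taken yet
            var query = new TableQuery<ShortLinkEntity>().Where(
                TableQuery.GenerateFilterCondition("ShortCode", QueryComparisons.Equal, code));
            var segment = await table.ExecuteQuerySegmentedAsync(query, null);
            if (!segment.Results.Any())
            {
                return code;
            }
        }
    }
}

Once a unique code comes back, the entity below is what ends up in the table.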

var entity = new ShortLinkEntity
{
    ShortCode = validShortCode,
    RowKey = Guid.NewGuid().ToString(),
    CreatedOn = DateTimeOffset.UtcNow,
    EndpointUrl = shortLinkDto.EndpointUrl,
    ExpiresOn = expirationDate,
    PartitionKey = Constants.TableStorageLinksPartitionKey,
    Timestamp = DateTimeOffset.UtcNow,
    TotalHits = 0,
    OwnerId = owner
};
var operation = TableOperation.Insert(entity);
var result = await table.ExecuteAsync(operation);

Then I return an HTTP response containing information about the new short link.

Now, when one of the short links is hit, the system needs to find out whether the short code exists and retrieve the endpoint associated with that short link. But because this is a cool and fancy Azure Functions demo app, I also want to track hits per short link, so I write a ‘hit’ to a storage queue as well.
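The redirect function that registers such a hit could look roughly like this; it’s a sketch, and the function name, route and lookup-by-ShortCode are assumptions based on the rest of the code:

[FunctionName("RedirectShortLink")]
public static async Task<IActionResult> RedirectShortLink(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "{shortCode}")] HttpRequest req,
    string shortCode,
    [Table(Constants.TableStorageLinksTableName)] CloudTable table,
    [Queue(Constants.TableStorageQueueHits)] IAsyncCollector<ShortLinkHitDto> hitsQueue,
    ILogger log)
{
    // Look up the short link entity for this short code
    var query = new TableQuery<ShortLinkEntity>().Where(
        TableQuery.GenerateFilterCondition("ShortCode", QueryComparisons.Equal, shortCode));
    var segment = await table.ExecuteQuerySegmentedAsync(query, null);
    var shortLink = segment.Results.FirstOrDefault();
    if (shortLink == null)
    {
        return new NotFoundResult();
    }

    // Register the hit; the function below picks it up from the queue and does the bookkeeping
    await hitsQueue.AddAsync(new ShortLinkHitDto
    {
        RowKey = shortLink.RowKey,
        ShortCode = shortCode,
        HitOn = DateTimeOffset.UtcNow
    });

    return new RedirectResult(shortLink.EndpointUrl);
}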

A different function will be triggered when a message arrives on that queue, and starts processing the information about that hit. Here is the entire function:

[FunctionName("ProcessIncomingHit")]
public static async Task Run(
    [QueueTrigger(Constants.TableStorageQueueHits)]ShortLinkHitDto hitDto,
    [Table(Constants.TableStorageLinksTableName)] CloudTable table,
    [Table(Constants.TableStorageHitsTableName)] CloudTable hitsTable,
    ILogger log)
{
            
    log.LogInformation($"Hit received for processing {hitDto.ShortCode}");
    var fetchOperation =
        TableOperation.Retrieve<ShortLinkEntity>(Constants.TableStorageLinksPartitionKey, hitDto.RowKey);
    var retrievedResult = await table.ExecuteAsync(fetchOperation);
    if (retrievedResult.Result is ShortLinkEntity shortLinkEntity)
    {
        var hitEntity = new HitEntity
        {
            PartitionKey = Constants.TableStorageHitsPartitionKey,
            RowKey = Guid.NewGuid().ToString(),
            ShortCode = hitDto.ShortCode,
            HitOn = hitDto.HitOn,
            Timestamp = DateTimeOffset.UtcNow
        };


        shortLinkEntity.TotalHits = shortLinkEntity.TotalHits + 1;
        var insertOperation = TableOperation.Insert(hitEntity);
        await hitsTable.ExecuteAsync(insertOperation);
        var updateOperation = TableOperation.InsertOrReplace(shortLinkEntity);
        await table.ExecuteAsync(updateOperation);
    }
}

Obviously, the function is triggered on the arrival of a message on the storage queue. I added bindings to the original short links table and to a hits table. The original short links table is used to increment the total hits counter of the short link. I also add a new entity to the hits table; this is used by an aggregation function that allows me to draw a graph of the hits over the past week.

The full source code can be found here.


And so today I found this really cool feature in Azure Key Vault… We know by now what the Key Vault is and what it’s capable of. But when developing an ASP.NET Core Web App, I found the usual solutions not optimal. You can either sort of inject your secrets into the system like so:
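Something along these lines, using the Microsoft.Azure.KeyVault and Microsoft.Azure.Services.AppAuthentication packages; the vault URL and secret name below are placeholders:

using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

public static class KeyVaultSecrets
{
    // Fetches a single secret straight from Key Vault using the app's (managed) identity
    public static async Task<string> GetStorageConnectionStringAsync()
    {
        var tokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        // The vault URL and secret name are hard-coded here, which is part of the problem
        var secret = await keyVaultClient.GetSecretAsync(
            "https://your-key-vault.vault.azure.net/", "StorageConnectionString");
        return secret.Value;
    }
}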

But this is far from optimal in your development environment. Also, the API calls hide the config keys in the code, meaning that changes would require development and redeployment of the system.

A second approach, which is fairly nice, is to use ARM templates. This means that app settings are created during deployment of the template and their values contain the secrets. This way, keys are easily changed; however, it’s a bit confusing when values do change. In fact, the best way is to redeploy the ARM template when values change.

Personally, I really like Visual Studio’s approach of having an appsettings.json in your project so the file structure is obvious. And because there are User Secrets, you don’t have to add sensitive data to the appsettings.json file. Just right-click the project and choose ‘Manage User Secrets’. Values in the secrets.json file override values in the appsettings.json file.

But now the Key Vault… I think the configuration settings of an Azure Web App are ideal for adding or overriding configuration. Just click your Web App in the Azure Portal and go to the Configuration blade. You can add values here and let them override values in appsettings.json as well. The problem is that secrets entered here are no secret anymore. Time for a change.

I created a new resource group on Azure and added a Key Vault and a Web App. Please note it’s not mandatory to have everything in the same resource group. Now we need to tell the Key Vault that the web app is allowed to access our secrets. In order to do so, the Web App must be given an identity so we can reference it. Click on your web app so you can access its properties and look for the Identity settings. Once selected, you can switch the toggle to On, click Save and then confirm your action.

Then go to your Key Vault and find the Access Policies tab. Here you click ‘Add access policy’. A new blade opens that allows you to select users and apps and grant permissions to them. Because I created an identity for the Web App (I named my web app disco-ball), I’m able to select it in the ‘Select principal’ field. I leave the template field alone; I decided to only set individual permissions. My disco-ball app can only get keys, secrets and certificates. Nothing more. The settings blade for your access policy should look something like mine.


Click the Add button to return to the list of access policies and don’t forget to hit the save button there. I always forget the save button and then wonder why nothing works…

What we did here was grant the disco-ball web app access to the Key Vault, but it is only capable of reading keys, secrets and certificates. And now on to why I think this is such a nice solution. Let’s make a secret, for example a connection string to a storage account. Go to your Key Vault, open the Secrets blade and click Generate/Import. Enter a name for your secret, let’s say StorageConnectionString, and the value (I’m not going to post one here, because it’s secret obviously). Click the Create button and you will return to your list of secrets. Surprise, your first secret is there. Now, to read this secret from your web app, you click on the secret and open the specific version of the secret you want to use. If this is a new secret, only one version will be available. If you change the value over time, new versions will be created. When you have opened the version you want, you see the details and, more importantly, the ‘Secret Identifier’. Click the ‘Copy to clipboard’ icon just after the ‘Secret Identifier’ field.

Now we head back to your Web App and open the configuration blade. This configuration blade can be used to override app settings of your Web App. If you’re not familiar with this technique you may want to read the documentation first.

Anyway, create a new application setting and name it the same as the setting in your appsettings.json. And for the value, you enter… drum roll…

@Microsoft.KeyVault(SecretUri=<secret-identifier-here>)

And that’s the trick! Your setting value is now replaced with the value of your secret. Personally, I think this is a really nice solution because you don’t have to set up a Key Vault client or change anything in your ‘regular development flow’. You can even use the appsettings.json and secrets.json in your local development environment without any problems. A clean, fast and neat solution. Let me know what you think in the comments below!


So here are some thoughts about DDD. I really love the ideas and principles of DDD (Domain Driven Design) and I really recommend looking into it. That's why it's time for a new blog. Let's call it a practical introduction to DDD for C# developers.

This is the first post of a series. This post is an introduction to DDD and how to build a domain model.

So, what is DDD? You probably know the meaning of the abbreviation by now, but what does it really mean? The answer to that question is easy yet complicated. DDD is a huge thing with a whole lot involved, but basically you're dividing the functionality of your system into separate domains. In the classic example of a web shop, the catalogue, the basket and the order process would all live in a separate domain. This may also be the reason why DDD and microservices are such a good marriage; however, leveraging the power of DDD doesn't necessarily mean your technical architecture must be microservices. You can enjoy the advantages of DDD in a huge monolith as well.

All the functionality that you pack into a domain is called the bounded context. When starting a microservices architecture, you probably want each bounded context in a separate microservice, although this isn't true for all situations, so be sure to evaluate your decisions. Now, in this world of DDD, there's also someone called the domain expert. This guy is the smartest in the class for a given domain and can tell you everything about it. Compared to agile/scrum, you may identify the domain expert as a Product Owner, but for a specific domain. Some domains may even share the same person as their expert. Here's where it can get confusing: having different experts for different domains may also introduce differences in terminology. In DDD, we think that's fine… For example, an entity may be called a User in the first domain, but a Customer in the second domain, although they originated from the very same entity.

Bounded contexts, as the word says, have a huge boundary around them. This means that all functionality and infrastructure involved with a domain are separated from other domains. Different domains, for example, should not share the same data store. This may become a little challenging when a certain entity must live in multiple domains, for example the user and the customer. A messaging system must be configured to synchronize the changes between domains (assuming the data stores are separated, which is, again, recommended). Eventual consistency is very important, so be sure to have a good solution in place. If a user changes his email address in the user service and places an order, you don't want the order service to send a confirmation email to the old address. The new email address should be synchronized to the order service so it 'knows' the new address. One important rule of DDD is that only one domain can change a certain entity. So if an email address belongs to a user, only the user service can change it. All domains may use the email field of a user, but only one can change it.
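A minimal sketch of what such a synchronization message could look like; the event name and properties are my own, and a Service Bus topic like the one described earlier would be a fine way to transport it:

using System;

// Published by the user service (the only domain allowed to change the email address);
// the order service subscribes and updates its own copy of the data.
public class UserEmailAddressChangedIntegrationEvent
{
    public Guid UserId { get; set; }
    public string NewEmailAddress { get; set; }
    public DateTimeOffset ChangedOn { get; set; }
}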

So what’s in it for me?

So, in a couple of brief paragraphs, I summed up a couple of the fundamentals of DDD. These help you make decisions as you go. That's a lot of rules to keep in mind, so there must be a benefit somewhere… And oh yes, there is… Why would you build software the DDD way, what's the advantage, why why why?

Well, the answer to that is basically given in the previous paragraphs. Let's point them out…

There are not many company processes known by a single person. No C-Level manager of a huge online web store like Amazon knows the details of the packaging process. The packing process manager does. So making this guy the domain expert of the packing process in your software makes sense. Also, the packing software will then probably contain terminology and names known by the ‘packaging process guys’. There are no translations between the domain expert and the software solution. Centralizing knowledge is key, because with that the business is capable of ensuring that understanding the software is not locked in ‘tribal knowledge’. This means that the information about what the software does is open and everyone can contribute. The developer is not (anymore) the only one who knows the entire process of a business.

And finally, I think a well-designed piece of software that uses the principles of DDD is much easier to maintain compared to traditional techniques. I also experienced fewer ‘fixing one moving part breaks another’ moments. All the moving parts are still in place, but they’re no longer dependent on each other.

Your first domain model

So the basics are easy. I want to create a domain model for a user. The user has an ID, name, email address and a password. Then I also want to track a created and expiration date. So I start with a couple of properties:

public Guid Id { get; private set; }
public string DisplayName { get; private set; }
public string EmailAddress { get; private set; }
public Password Password { get; private set; }
public DateTimeOffset CreatedOn { get; private set; }
public DateTimeOffset ExpiresOn { get; private set; }

Note that the setters of all properties are private. This prevents external systems from changing the values of the properties. You may want the EmailAddress property to be validated, for example. If the EmailAddress property is public (and thus settable by other classes), you cannot guarantee that the value of the property is always correct, and therefore you cannot guarantee the valid state of the domain model. So instead, all setters are private so nobody can harm the correct state of our domain model. Now, to change the email address, we need to add a method that does so.

public void SetEmailAddress(string value)
{
    if (string.IsNullOrWhiteSpace(value))
    {
        throw new ArgumentNullException(nameof(value));
    }
    if (!Equals(EmailAddress, value))
    {
        if (value.IsValidEmailAddress())
        {
            EmailAddress = value;
        }
        else
        {
            throw new ArgumentException($"The value {value} is not a valid email address");
        }
    }
}

You can see that all validations for an email address (or at least the validations required for this system) are done inside the SetEmailAddress() method, and the value of the EmailAddress property only changes when the new email address is valid according to the business rules. These business rules, by the way, are defined by the domain expert.

I think a domain model has two kinds of constructors: one constructor to create a new object, for example a user, and a second one to reproduce an existing object from (for example) a data store. The difference is (in my opinion) that when creating a new user, you pass only the minimal required fields to the constructor to create a valid domain model. In this example, the email address of the user is mandatory. Let’s say it’s the only mandatory field. Then the constructor of a new user will accept only one parameter, the email address. The constructor creates a new instance of the domain model class, calls the SetEmailAddress() method to set the passed email address and returns the newly created object. This way, all validations on the email address are executed, so when everything runs fine, we end up with a model containing only an email address, but it’s a valid domain model.

public User(string emailAddress)
{
    Id = Guid.NewGuid();
    SetEmailAddress(emailAddress);
    CreatedOn = DateTimeOffset.UtcNow;
}

Now, if you have more information available about the user, let’s say his display name and a password, you create additional Set methods like the SetEmailAddress() method, validate the passed information and then change the property value as soon as everything is fine. You can also see that I set some default values in the constructor as well.
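For example, a SetDisplayName() along the same lines could look like this; a sketch, where the only business rule I assume is that the name cannot be empty:

public void SetDisplayName(string value)
{
    if (string.IsNullOrWhiteSpace(value))
    {
        throw new ArgumentNullException(nameof(value));
    }
    if (!Equals(DisplayName, value))
    {
        DisplayName = value;
    }
}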

Now you can pass that user to a repository somewhere in order to store it somewhere safe. In case you want to change the information of a certain user, you fetch that information from the data store and reproduce the domain model.

This procedure uses the second constructor. The second constructor accepts all fields in the domain model and will instantly create it.

public User(Guid id, 
    string emailAddress, 
    string displayName,
    Password pwd, 
    DateTimeOffset created,
    DateTimeOffset expires)
{
    Id = id;
    EmailAddress = emailAddress;
    DisplayName = displayName;
    Password = pwd;
    CreatedOn = created;
    ExpiresOn = expires;
}

So I hope you’re now thinking and maybe can already see the benefit of DDD. You see this mysterious Password data type. In DDD terms, that’s called a Value Object. Next post is about value objects.