
So now that you've had an introduction to DDD, you've probably gotten enthusiastic and started right away, of course! And then you ran into a couple of problems. As mentioned, your domain model must (always) be in a valid state, but for some reason you were not able to guarantee that.

Let's, for example, create a domain model for an appointment:
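Something along these lines will do (a minimal sketch; the exact property and parameter names are assumptions based on the description that follows):

public class Appointment
{
    public Guid Id { get; private set; }
    public string Title { get; private set; }
    public DateTimeOffset Start { get; private set; }
    public DateTimeOffset End { get; private set; }

    public Appointment(string title, DateTimeOffset start, DateTimeOffset end)
    {
        Id = Guid.NewGuid();
        Title = title;
        Start = start;
        End = end;
    }
}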

Pretty straightforward, right? There's a nice couple of properties and a constructor accepting a title, a start date and an end date.

Now the problem here is that as soon as the domain model is instantiated, it may be invalid. An appointment has a start and end date to which validation rules apply: the start date must be earlier than the end date, for instance. You can validate the start and end date in the constructor, because both values are passed to the constructor at the same time. The problem arises when editing the appointment.

The user made a mistake entering the appointment and needs to change both the start and the end date of the appointment. When you fetch the domain model from a data store and start changing the dates one by one, there's no way to verify whether a single value is valid or not: if you pass a new start date, it's validated against the old end date, and vice versa. In this situation, you want to use value objects.

Let's refactor the above domain model so it holds a value object with which we can make sure the domain model is always valid. First, we create a value object called DateRange. Take a peek at the following code:
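Something like this (a sketch; the names are assumptions based on the description that follows):

public class DateRange
{
    public DateTimeOffset Start { get; private set; }
    public DateTimeOffset End { get; private set; }

    public DateRange(DateTimeOffset start, DateTimeOffset end)
    {
        Start = start;
        End = end;
    }
}

public class Appointment
{
    public Guid Id { get; private set; }
    public string Title { get; private set; }
    public DateRange Schedule { get; private set; }

    public Appointment(string title, DateRange schedule)
    {
        Id = Guid.NewGuid();
        Title = title;
        SetSchedule(schedule);
    }

    public void SetSchedule(DateRange value)
    {
        if (value == null)
        {
            throw new ArgumentNullException(nameof(value));
        }
        // The start date must be earlier than the end date.
        if (value.Start >= value.End)
        {
            throw new ArgumentException("The start date must be earlier than the end date");
        }
        Schedule = value;
    }
}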

The DateRange object holds a Start and an End date for a date range. The Appointment domain model is changed so it holds a DateRange property named Schedule. Now if a user wants to change the appointment, you create a new DateRange object containing the start and the end date and pass it to the SetSchedule method. This method accepts the DateRange object and validates that the start date is actually earlier than the end date. And now everybody is happy, and our domain model is always valid.


So here are some thoughts about DDD. I really love the ideas and principles of Domain Driven Design (DDD) and I really recommend looking into it. That's why it's time for a new blog series. Let's call it a practical introduction to DDD for C# developers.

This is the first post of a series. This post is an introduction to DDD and how to build a domain model.

So, what is DDD? You probably know the meaning of the abbreviation by now, but what does it really mean? The answer to that question is easy yet complicated. DDD is a huge subject with a whole lot involved, but basically you're dividing the functionality of your system into separate domains. In the classic example of a web shop, the catalogue, the basket and the order process would each live in a separate domain. This may also be the reason why DDD and Microservices are such a good marriage; however, leveraging the power of DDD doesn't necessarily mean your technical architecture must be Microservices. You can enjoy the advantages of DDD in a huge monolith as well.

All the functionality that you pack into a domain is called the bounded context. When starting a Microservices architecture you probably want each bounded context in a separate Microservice, although this isn't true for all situations, so be sure to evaluate your decisions. Now in this world of DDD, there's also someone called the domain expert. This guy is the smartest in class for a given domain and can tell you everything about it. Compared to agile/scrum you may identify the domain expert as a Product Owner, but for a specific domain. Some domains may share the same person as their expert. This is also where things can get confusing: having different experts for different domains may introduce differences in terminology. In DDD, we think that's fine… For example, an entity may be called a User in one domain, but a Customer in another, although they originated from the very same entity.

Bounded contexts, as the name says, have a huge boundary around them. This means that all functionality and infrastructure involved with a domain is separated from other domains. Different domains, for example, should not share the same data store. This becomes a little challenging when a certain entity should live in multiple domains, for example the user and the customer. A messaging system must be configured to synchronize changes between domains, at least when the data stores are separated, which is (again) recommended. Eventual consistency is very important, so be sure to have a good solution in place. If a user changes his email address in the user service and then places an order, you don't want the order service to send a confirmation email to the old address. The new email address should be synchronized to the order service so it 'knows' the new address. One important rule of DDD is that only one domain can change a certain entity. So if an email address belongs to a user, only the user service can change it. All domains may use the email field of a user, but only one can change it.
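To make that synchronization a bit more concrete: the message that travels between domains can be as small as a contract like the one below (a hedged illustration, not tied to any particular messaging framework):

// Published by the user service whenever a user changes his email address.
// Other services (like the order service) subscribe to this message and update
// their own copy of the data, which keeps them eventually consistent.
public class UserEmailAddressChanged
{
    public Guid UserId { get; set; }
    public string NewEmailAddress { get; set; }
    public DateTimeOffset ChangedOn { get; set; }
}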

So what’s in it for me?

So in a couple of brief paragraphs, I summed up a few fundamentals of DDD. These help you make decisions along the way. That's a lot of rules to keep in mind, so there must be a benefit somewhere… And oh yes, there is… Why would you write software the DDD way, what's the advantage?

Well, the answer to that is basically given in the previous paragraphs. Let's point them out…

Not many company processes are known in full by a single person. No C-level manager of a huge online web store like Amazon knows the details of the packaging process. The packaging process manager does. So making this guy the domain expert of the packaging process in your software makes sense. The packaging software will then probably contain terminology and names known by the 'packaging process guys'. There are no translations between the domain expert and the software solution. Centralizing knowledge is key, because it enables the business to ensure that understanding of the software is not locked in 'tribal knowledge'. This means that information about what the software does is open and everyone can contribute. The developer is no longer the only one who knows the entire process of the business.

And finally, I think a well designed piece of software that uses the principles of DDD is much easier to maintain compared to traditional techniques. I also experienced fewer 'fixing one moving part breaks another' moments. All the moving parts are still in place, but they are no longer dependent on each other.

Your first domain model

So the basics are easy. I want to create a domain model for a user. The user has an ID, a name, an email address and a password. I also want to track a creation date and an expiration date. So I start with a couple of properties:

public Guid Id { get; private set; }
public string DisplayName { get; private set; }
public string EmailAddress { get; private set; }
public Password Password { get; private set; }
public DateTimeOffset CreatedOn { get; private set; }
public DateTimeOffset ExpiresOn { get; private set; }

Note that the setters of all properties are private. This is to prevent external systems from changing the values of the properties. You may want the EmailAddress property to be validated, for example. If the EmailAddress setter is public (and thus settable from other classes), you cannot guarantee that the value of the property is always correct, and therefore you cannot guarantee the valid state of the domain model. So instead, all setters are private so nobody can harm the correct state of our domain model. Now, to change the email address, we need to add a method that does so.

public void SetEmailAddress(string value)
{
    if (string.IsNullOrWhiteSpace(value))
    {
        throw new ArgumentNullException(nameof(value));
    }
    if (!Equals(EmailAddress, value))
    {
        if (value.IsValidEmailAddress())
        {
            EmailAddress = value;
        }
        else
        {
            throw new ArgumentException($"The value {value} is not a valid email address");
        }
    }
}
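By the way, IsValidEmailAddress() is not a built-in .NET method; it's assumed here to be a small extension method, roughly along these lines:

public static class StringExtensions
{
    public static bool IsValidEmailAddress(this string value)
    {
        try
        {
            // MailAddress throws a FormatException when the value is not a valid address.
            var address = new System.Net.Mail.MailAddress(value);
            return address.Address == value;
        }
        catch (FormatException)
        {
            return false;
        }
    }
}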

You can see that all validations for an email address (or at least the validations required for this system) are done inside the SetEmailAddress() method, and the value of the EmailAddress property only changes when the new email address is valid according to the business rules. These business rules, by the way, are defined by the domain expert.

I think a domain model has two kinds of constructors: one constructor to create a new object, for example a user, and a second one to reproduce an existing object from (for example) a data store. The difference is (in my opinion) that when creating a new user, you pass the minimal required fields to the constructor to create a new valid domain model. In this example, the email address of the user is mandatory. Let's say it's the only mandatory field. Then the constructor of a new user will accept only one parameter, the email address. The constructor creates a new instance of the domain model class and calls the SetEmailAddress() method to set the passed email address. This way, all validations on the email address are applied, so when everything runs fine we end up with a model containing only an email address, but it's a valid domain model.

public User(string emailAddress)
{
    Id = Guid.NewGuid();
    SetEmailAddress(emailAddress);
    CreatedOn = DateTimeOffset.UtcNow;
}

Now if you have more information available about the user, let's say a display name and a password, you create additional Set methods like the SetEmailAddress() method, validate the passed information and change the property value as soon as everything is fine. You can also see that I set some default values in the constructor.
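A display name setter could, for example, look like this (the exact business rules are up to the domain expert; this is just a sketch following the same pattern):

public void SetDisplayName(string value)
{
    if (string.IsNullOrWhiteSpace(value))
    {
        throw new ArgumentNullException(nameof(value));
    }
    if (!Equals(DisplayName, value))
    {
        DisplayName = value;
    }
}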

Now you can pass that user to a repository somewhere in order to store it somewhere safe. In case you want to change the information of a certain user, you fetch that information from the data store and reproduce the domain model.

This procedure uses the second constructor, which accepts all fields of the domain model and instantly recreates it.

public User(Guid id, 
    string emailAddress, 
    string displayName,
    Password pwd, 
    DateTimeOffset created,
    DateTimeOffset expires)
{
    Id = id;
    EmailAddress = emailAddress;
    DisplayName = displayName;
    Password = pwd;
    CreatedOn = created;
    ExpiresOn = expires;
}

So I hope you're thinking along now and can maybe already see the benefit of DDD. You see this mysterious Password data type? In DDD terms, that's called a value object. The next post is about value objects.


So as promised, I'm back with part 2 of the Valet Key design pattern blog. In part 1 I showed how the design pattern works, how it works on Azure and how to implement the pattern using ASP.NET Core. In this second part, I'll show how to implement the client solution. I chose to use an Angular client, not only because I like Angular, but also because it's nice to have a JavaScript implementation uploading directly into the cloud.

Let's briefly recap part 1

I wrote a backend system that creates a Shared Access Signature (or SAS). That SAS is valid for a specific endpoint. This endpoint represents a blob in Azure Storage, and the SAS grants write access to that endpoint. The backend system returns the endpoint and the corresponding SAS.

Make use of the Azure Blob Storage JavaScript library

Microsoft distributes a package of JavaScript files that enable you to communicate with an Azure Storage account. You can download that package at https://aka.ms/downloadazurestoragejs. I used version 2.10.103 in my project. If you unzip the package, you actually only need the azure-storage.blob.js file from the bundle folder.

Next up, Angular

Now you need to create an Angular project. If you don't feel like creating this project all by yourself, you may want to take a peek at my sample project on GitHub.

To create an Angular project, download and install NodeJS 10 and open a console window.

When I create an Angular project, I like to configure SASS as the default style type, and for this project let's disable testing. I also added the Angular router, which is huge overkill for this project.

npm i @angular/cli -g
ng new upload-example --style=scss --skip-tests --routing=true
cd upload-example
ng serve -o

You just installed the Angular CLI (globally) and created a new Angular project. The new project is created in a folder with the same name, so we navigate into that folder and start serving the project with a development server. The -o parameter opens the project in your default browser once the compiler is done and the dev server has started.

Now, in the assets folder, create a new folder 'scripts' and add the azure-storage.blob.js file. Then make sure Angular outputs that script by adding it to angular.json:

"scripts": ["src/assets/scripts/azure-storage.blob.js"]

Then replace all HTML in the app.component.html file with <router-outlet></router-outlet>.

Adding services

OK, now the hard part. I found this clever guy Stuart who had a great implementation of uploading files to blob storage. I extended his services to make a call to the backend prior to uploading, in order to get a valid endpoint and SAS. So in the app folder, I created a new folder services and added azure-storage.ts and blob-storage.service.ts.

Coming home

Then I added a module and component for the landing page:

ng g module pages/home
ng g component pages/home/home

This generates a Home module with a Home component in it. Be sure to import the HomeModule in your app.module.ts, otherwise Angular will not be able to show the component.

The home.component.ts contains some methods that allow the upload to happen. But first, let's add a file select box to the home.component.html file. Note the (change) event that passes changes to the home component.

onFileChange(event: any): void {
   this.filesSelected = true;
   this.uploadProgress$ = from(event.target.files as FileList).pipe(
     map(file => this.uploadFile(file)),
     combineAll()
   );
}

As you can see, the event handler loops through all files selected and fires this.uploadFile() passing the selected file. The uploadFile() method accepts the file and requests an endpoint and a SAS from our backend system.

uploadFile(file: File): Observable<IUploadProgress> {
   return this.blobStorage
     .aquireSasToken(file.name)
     .pipe(
       switchMap((e: ISasToken) =>
         this.blobStorage
           .uploadToBlobStorage(e, file)
           .pipe(map(progress => this.mapProgress(file, progress)))
       )
     );
}

The service contains a method aquireSasToken() which calls our backend. The backend uses a valid Azure Storage account connection string to create a SAS for a certain endpoint and returns this information. Then the uploadToBlobStorage() method is called, which uses the SAS to determine where to upload to, and also accepts the file. The mapProgress() method keeps track of the upload progress and reports it as a percentage.

One final note

Just like your ASP.NET web application, an Azure Storage account is protected with CORS. So in order to upload blobs using a JavaScript client like Angular, you need to set some CORS rules in Azure. This Angular project runs on localhost port 4200, so I added that origin, accepting all methods and all headers. Note that these settings are not recommended in production environments.
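These rules can be set through the Azure portal, but if you prefer to script them, the same WindowsAzure.Storage SDK used on the server side can set them too. A rough sketch (assuming a connectionString variable holding the storage account connection string):

// Requires the WindowsAzure.Storage package
// (Microsoft.WindowsAzure.Storage and Microsoft.WindowsAzure.Storage.Shared.Protocol),
// running inside an async method.
var account = CloudStorageAccount.Parse(connectionString);
var client = account.CreateCloudBlobClient();

var serviceProperties = await client.GetServicePropertiesAsync();
serviceProperties.Cors.CorsRules.Clear();
serviceProperties.Cors.CorsRules.Add(new CorsRule
{
    AllowedOrigins = new List<string> { "http://localhost:4200" },
    AllowedMethods = CorsHttpMethods.Get | CorsHttpMethods.Put | CorsHttpMethods.Post | CorsHttpMethods.Options,
    AllowedHeaders = new List<string> { "*" },
    ExposedHeaders = new List<string> { "*" },
    MaxAgeInSeconds = 3600
});
await client.SetServicePropertiesAsync(serviceProperties);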


Once in a while you run into a solution without even noticing there was a problem. For me, the Valet Key pattern was such a solution. I used to work at a company where we planned to create import functionality for transactions. We're talking about loads and loads of transactions. The software system runs as an ASP.NET Web API, in a multi-tenant environment.

A straightforward implementation of such an import would potentially block additional requests, and thus other tenants of the service, because the server is busy importing a huge file. We tackled this problem by posting the import job on a queue and running it distributed. What I didn't realize was that the actual file upload itself was also a potential disaster waiting to happen.

Tell me more, old man

I've been developing software for over two decades. This means that for me there was a time when it was actually a breakthrough that you were able to upload files through the browser. Today, there's nothing special about uploading files and nobody cares about the technique that does the work behind the scenes. For example, uploading a new profile picture to Facebook or Instagram sets a whole lot of services in action. It's unlikely your picture perfectly meets the requirements for a profile picture, so some scaling and optimization will probably be done before your picture appears in your profile. Some online services even mention that it takes some time for your picture to be processed. We all just accept it, it's nothing special.

How did we do that in the past?

In the past, we uploaded pictures to an endpoint on the web server and handled the incoming stream, or stored the incoming stream as a file. Then we would immediately process the incoming data (in the case of a picture, resize and optimize it). After all was done, we would send a response to the client with the resulting image, or an error message in case something went wrong.

The new kid on the block

So here comes the Valet Key pattern. We now like to work with cloud solutions, making your software live in a more and more distributed environment. Given the previous scenario, you just want to prevent the server from having to deal with that 'load' of work. When a single person uploads a decent file to the server, you're good. However, when your service goes viral and thousands of people start uploading profile pictures just taken with their brand new full-frame SLR camera, meaning 25+ MB profile pictures, your service may become a little busy, so to speak. And that is exactly what the Valet Key pattern allows you to avoid: it lets clients upload (or let's say access) files (or in Azure terms, blobs) without the intervention of your web service.

Good stuff, tell me how!!

To implement the Valet Key pattern, in my case for the Microsoft Azure cloud, you need access to a storage account. As soon as you have a valid cloud connection string with enough permissions, you're good to go.
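Getting a reference to a blob container from that connection string could look roughly like this (a sketch; the container name is just an example):

// Requires the WindowsAzure.Storage package, running inside an async method
var account = CloudStorageAccount.Parse(connectionString);
var client = account.CreateCloudBlobClient();
var cloudContainer = client.GetContainerReference("profile-pictures");
await cloudContainer.CreateIfNotExistsAsync();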

In case of an upload

Let's assume you have a website allowing users to upload a profile picture. The client calls the backend (your web server) for access to storage. Your web server creates a reference to a container in blob storage (compare a container with a folder on your hard drive for now). Then your server reserves a spot in that container (called a BlockBlob). This is in fact a reference to a file (that doesn't exist yet). Then you create temporary (write) access to the file. Doing so gives you a shared access signature, generated from an access policy, which you return to the client. Your client now has a valid endpoint to upload a file to, and a signature that allows it to write to that location.

// cloudContainer is the CloudBlobContainer for the target container; blobName identifies the new blob
var blob = cloudContainer.GetBlockBlobReference(blobName.ToString());

// Grant write-only access for a limited time window; the start time is set slightly
// in the past to compensate for clock skew between machines.
var policy = new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Write,
    SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(20)
};
var sas = blob.GetSharedAccessSignature(policy);

// Return the blob endpoint and the SAS to the client
return new ValetKeyDto
{
    BlobUri = blob.Uri,
    Credentials = sas,
    BlobName = blobName.ToString()
};

The story continues

OK, so now we have some nice server-side functionality allowing us to upload stuff directly into a blob container on an Azure Storage account. Good! But what about the client? Good question! The client is covered in the next blog post, Valet Key part 2.