Are Modular Monoliths a Winner?
As a consultant, I get to see a lot of new projects starting up and evolving over time. By projects, I mean software systems that are built from scratch. Doing so is challenging from a lot of perspectives: the business wants to go to market as soon as possible, while the engineers aim for a solid, maintainable solution that is flexible enough to evolve into the product you envision at the horizon of your roadmap.
Are microservices dying out?
About a decade ago, the Microservices architecture became very popular. Companies like Spotify and Netflix were great examples of how to split a system into multiple self-contained services. Suddenly, every software system was apparently large enough to be the next Spotify or Netflix. At least, that is what we thought. In reality, there often was no good incentive to change the software architecture in the first place. Even so, a lot of companies started migrating their systems to a Microservices architecture and ran into numerous problems and challenges. And for good reason…
Doing Microservices is HARD
The reason is that doing Microservices is hard. You need a good plan for how you are going to solve the messaging problem. We don’t want chatty services, so communication from one service to another should go through some distributed mechanism rather than a direct HTTP request. Now a messaging system sits somewhere in the middle, and then what? Say I have a User service that is responsible for maintaining the users of my system. How do I query the name of a user from a different service if I should not make a direct HTTP call? This is where the next problem arises: eventual consistency. Slowly (but eventually) you end up with a serious software system whose messaging and communication mechanisms are sophisticated enough that developers can no longer easily understand what is going on. The promise of Microservices is small, easy-to-maintain, quickly deployable services, and it delivers exactly that. Once you have dodged all the plumbing problems, there are situations where you can take advantage of this approach, but most of the time… it is not for you…
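To make the eventual consistency point concrete, below is a minimal sketch of the usual answer to "how do I get a user's name without calling the User service over HTTP": the consuming service keeps its own copy of the data and updates it from events. The names here (IEventBus, UserRenamed, UserNameProjection) are hypothetical, standing in for whatever broker and message types you would really use.

```csharp
using System;
using System.Collections.Generic;

// The message published by the User service when a user changes their name.
public record UserRenamed(Guid UserId, string NewName);

// Hypothetical messaging abstraction; in a real system this would be backed
// by a broker such as RabbitMQ or Azure Service Bus.
public interface IEventBus
{
    void Publish<TEvent>(TEvent @event);
    void Subscribe<TEvent>(Action<TEvent> handler);
}

// User service: it owns the user data and publishes facts about changes.
public class UserService
{
    private readonly IEventBus _bus;
    public UserService(IEventBus bus) => _bus = bus;

    public void Rename(Guid userId, string newName)
    {
        // ...update the user store here...
        _bus.Publish(new UserRenamed(userId, newName));
    }
}

// Consuming service: it keeps its own, eventually consistent copy of the
// user names it needs, so it never has to call the User service directly.
public class UserNameProjection
{
    private readonly Dictionary<Guid, string> _names = new();

    public UserNameProjection(IEventBus bus) =>
        bus.Subscribe<UserRenamed>(e => _names[e.UserId] = e.NewName);

    public string? GetName(Guid userId) =>
        _names.TryGetValue(userId, out var name) ? name : null;
}
```

The copy in the consuming service lags behind the User service until the event arrives; that lag is exactly the eventual consistency you now have to live with.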
Finally, the business wants your system in production as soon as possible. Not feature-complete, obviously, but as an MVP (Minimum Viable Product) that clients can work with and that will evolve into the mature solution you aim for. All the plumbing that has to be in place for your Microservices pollutes the road to production and forces you to spend (too much) attention on infrastructure instead of getting features complete enough to ship.
Then there is the Monolith
Today, the monolith seems to leave a bad taste. I believe this comes from the time when we tried to build n-tier solutions with a presentation layer, some kind of service or core layer, then a data layer, and finally some data persistence mechanism. In my honest opinion, this approach is still valid. Then why did we fail so many times doing so?
I think the answer is that we failed to separate responsibilities in our systems. With the rise of DDD (Domain-Driven Design), developers became more and more aware of boundaries and concerns. DDD is often closely associated with Microservices. However, you can totally do DDD, or at least take advantage of some of its benefits, without having to deal with Microservices. This means that you do think about boundaries in your software system and add separations, but you do not necessarily host those separated parts separately.
Welcome to the Modular Monolith
So we take some of the advantages we know Microservices can bring, leave the complexity and plumbing out, add some of the advantages that come with monoliths, and combine all of that into an architecture we call a Modular Monolith. Let’s assume that you work on a C# software project (that is what I work with most of the time), but don’t feel left out if you’re a Java developer or work in any other OOP language, because the principle is the same.
You have a single solution that represents your software system. For each service that you would otherwise create as a Microservice, instead of creating an entirely new solution, you create a class library in your solution. This class library is organized using the n-tier principle: you create a service layer and a data layer (or repository, if you like) that connects to a data store.
Below is an example of a webshop hierarchy. Your solution would look something like this (a sketch of what one of these libraries might contain follows the list):
- My Webshop Solution (Solution)
  - Checkout Service (Library)
  - Cart Service (Library)
  - Catalog Service (Library)
  - Order Service (Library)
  - WebShop API
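As a rough idea of what one of those libraries could contain, here is a minimal sketch of the Catalog library with its two layers: a service layer on top and a repository (data layer) behind an interface. All names are illustrative, not prescribed.

```csharp
using System;

namespace Webshop.Catalog
{
    public record Product(Guid Id, string Name, decimal Price);

    // Data layer: the only place that talks to the catalog's data store.
    public interface IProductRepository
    {
        Product? GetById(Guid id);
        void Save(Product product);
    }

    // Service layer: the surface that the WebShop API (and, for now,
    // other modules) use to work with the catalog.
    public class CatalogService
    {
        private readonly IProductRepository _repository;
        public CatalogService(IProductRepository repository) => _repository = repository;

        public Product? GetProduct(Guid id) => _repository.GetById(id);

        public void ChangePrice(Guid id, decimal newPrice)
        {
            var product = _repository.GetById(id)
                ?? throw new InvalidOperationException($"Unknown product {id}");
            _repository.Save(product with { Price = newPrice });
        }
    }
}
```

Each library follows the same recipe, with its own domain types, its own repository, and its own data store if it needs one.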
Building more sophisticated communication
Given the webshop above, I would allow the first version of the system to have direct internal references between the libraries (as long as you can avoid circular references), because this requires the least amount of plumbing. Then at some point, when your system is stable and, thanks to its success, you start to see performance issues for example, you can think about making the boundaries between services stricter.
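Building on the Catalog sketch above, a direct internal reference is nothing more than a project reference and a plain method call, as in this small, hypothetical Checkout example:

```csharp
using System;
using Webshop.Catalog;

namespace Webshop.Checkout
{
    // The Checkout library takes a plain project reference on the Catalog
    // library and calls it in-process: no HTTP, no messaging, no serialization.
    public class CheckoutService
    {
        private readonly CatalogService _catalog;
        public CheckoutService(CatalogService catalog) => _catalog = catalog;

        public decimal PriceLine(Guid productId, int quantity)
        {
            // A plain method call into the Catalog module. If the modules are
            // split up later, only this call site has to change.
            var product = _catalog.GetProduct(productId)
                ?? throw new InvalidOperationException($"Unknown product {productId}");
            return product.Price * quantity;
        }
    }
}
```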
There are multiple ways to do so. One service could, for example, make an HTTP request to another service, but here you already start to see the challenges you face when creating Microservices: if the receiving service fails, the calling service fails as well, because the HTTP request returns an error. Also, you don’t want services to be too chatty, so heavy communication between services back and forth is not really a good idea.
Can a cache help?
It may help here to introduce a distributed cache. When data in the Catalog service changes, the Catalog service pushes that change as a materialized view to the cache. Now, when a different service needs catalog data, it uses the Cache-Aside pattern: it checks the cache first, so the data is returned immediately when it is present, and only makes an HTTP request to the Catalog service on a cache miss.
You can see that the amount of complexity is growing as you gradually move to more and more independent services, because now the Catalog service not only has to manage and maintain the data in the catalog’s persistent store, but also has to maintain the changed data in the cache.
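Here is a minimal sketch of both halves of that story. The abstractions (ICatalogCache, ICatalogHttpClient, ProductView) are hypothetical stand-ins for a distributed cache such as Redis and a typed HTTP client for the Catalog service.

```csharp
using System;
using System.Threading.Tasks;

// A read model of a product, small enough to live in the cache.
public record ProductView(Guid Id, string Name, decimal Price);

// Hypothetical cache abstraction; in practice this would wrap Redis or
// another distributed cache.
public interface ICatalogCache
{
    Task<ProductView?> GetAsync(Guid productId);
    Task SetAsync(ProductView view);
}

// Hypothetical typed HTTP client for the Catalog service.
public interface ICatalogHttpClient
{
    Task<ProductView> GetProductAsync(Guid productId);
}

// Catalog side: whenever catalog data changes, the materialized view is
// pushed to the cache. This is the extra maintenance the Catalog service owns.
public class CatalogCachePublisher
{
    private readonly ICatalogCache _cache;
    public CatalogCachePublisher(ICatalogCache cache) => _cache = cache;

    public Task PublishAsync(ProductView changedProduct) => _cache.SetAsync(changedProduct);
}

// Consumer side: Cache-Aside. Check the cache first; only on a miss do we
// make the HTTP call, and then we store the result for next time.
public class CatalogReader
{
    private readonly ICatalogCache _cache;
    private readonly ICatalogHttpClient _catalog;

    public CatalogReader(ICatalogCache cache, ICatalogHttpClient catalog)
    {
        _cache = cache;
        _catalog = catalog;
    }

    public async Task<ProductView> GetProductAsync(Guid productId)
    {
        var cached = await _cache.GetAsync(productId);
        if (cached is not null) return cached;

        var fetched = await _catalog.GetProductAsync(productId);
        await _cache.SetAsync(fetched);
        return fetched;
    }
}
```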
Messaging and offloading workloads
The next step would be to use a messaging system to distribute work. Let’s say your webshop requires some kind of complex reporting, and reports take a while to generate. You don’t want your users to wait for those reports to be generated. Instead, you generate the reports in the background and send a notification when a report is ready. Messaging is the way to do this. There are tons of messaging solutions: Azure Service Bus, NServiceBus, RabbitMQ, you name it. They all share the same principle: you put a message on a queue, and some worker picks up the message and processes it.
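A minimal sketch of that split, with a hypothetical IMessageQueue abstraction and message type standing in for whichever broker you pick:

```csharp
using System;
using System.Threading.Tasks;

// The message that ends up on the queue.
public record ReportRequested(Guid ReportId, Guid CustomerId);

// Hypothetical queue abstraction; a real system would put Azure Service Bus,
// NServiceBus, RabbitMQ, or similar behind this interface.
public interface IMessageQueue
{
    Task EnqueueAsync<TMessage>(TMessage message);
}

// In the WebShop API: only enqueue the request and return immediately,
// so the user never waits for the report to be generated.
public class ReportRequestHandler
{
    private readonly IMessageQueue _queue;
    public ReportRequestHandler(IMessageQueue queue) => _queue = queue;

    public async Task<Guid> RequestReportAsync(Guid customerId)
    {
        var reportId = Guid.NewGuid();
        await _queue.EnqueueAsync(new ReportRequested(reportId, customerId));
        return reportId; // the client can poll, or wait for a notification
    }
}

// In a background worker: pick up the message and do the slow work.
public class ReportWorker
{
    public async Task HandleAsync(ReportRequested message)
    {
        // ...generate and store the report (the slow part)...
        await Task.Delay(TimeSpan.FromSeconds(5)); // placeholder for real work
        // ...send a "your report is ready" notification...
    }
}
```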
Most mature worker solutions, like Azure Functions, contain sophisticated scaling mechanisms that add workers when the number of messages stacks up. So offloading the report generation to a separate system not only takes the workload away from your central API, which is no longer affected by the performance hit, but also gives you a flexibly scaling solution able to handle huge amounts of work.
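For illustration, a queue-triggered worker in the classic in-process Azure Functions model looks roughly like this; the queue name and connection setting are placeholders you would replace with your own.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class GenerateReportFunction
{
    // The ServiceBusTrigger binding makes the Functions runtime pull messages
    // from the queue and scale out workers when messages pile up.
    // "report-requests" and "ServiceBusConnection" are placeholder names.
    [FunctionName("GenerateReport")]
    public static void Run(
        [ServiceBusTrigger("report-requests", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation("Generating report for message: {Message}", message);
        // ...deserialize the message and generate the report here...
    }
}
```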