Microservices are supposed to solve the scaling problem.
That is: a monolith can only handle so many requests per second, so the supposed answer is microservices, potentially with multiple replicas each.
Now let’s take the classic shop example with Product and Order models and their services.
If every microservice instance has its own dedicated data store, and you run, say, two extra replicas of the OrderService (three instances total), every single OrderService instance would have its own truth.
Therefore the data stores behind a logical service like ProductService or OrderService need to be synchronized. But that also means the number of requests you can handle depends on how resilient and performant the backing data store is.
However I imagine that would be insanely expensive.
Imagine running a whole DBMS for every single entity. The cost, not only in hardware, would go through the roof.
The solution is apparently sending messages. But for that you also need a messaging service. And systems like Kafka or NATS (with JetStream) persist those messages, which effectively turns the message broker itself into a distributed database.
I don’t really know what the question is, I guess it’s
“What is the point of microservices when all you need is to scale out your database backend?” And not only that: now you also have to watch all your microservice instances.
I could also imagine a different topology.
Take the OrderService: every instance in this logical group has its own embedded boltdb store that it persists on disk. Every instance is connected to a NATS server cluster.
What does the OrderService do in detail?
It listens for messages. What messages? To answer that we must know its purpose.
The OrderService handles the creation of orders, so the relevant messages would be something like CreateOrder and OrderCreated.
A CreateOrder message gets posted and is accepted by exactly one of the three instances.
MS1 creates the order and saves it to disk. MS1 then publishes an OrderCreated message including the saved entity. All other OrderService instances listen for this message and save the entity to their local disk. Everything is in sync.
But I could imagine race conditions being a problem there.
Effectively this means doing manually what a distributed database already does for you.
So what is actually keeping me from skipping the per-entity service separation altogether? I could just scale the backend database and replicate the monolith, since the single source of truth lives in the database anyway.
I could just do away with all the messaging and security between microservices and keep things simple and internal. Seems more cost-efficient to me in running, observing and developing.
Why do I need microservices?
I can imagine one answer being: “Multiple teams can work on their individual service.” OK, but can’t they just work in a subdirectory of the monolith instead of incurring additional infrastructure cost?
And then I read “You can mount your individual REST services in your reverse proxy / API gateway”, but isn’t that what you’re already doing in your monolith?
The more I read and think about microservice architecture and event-driven architecture, the more I get the feeling it’s just something to drive business toward “Cloud” services instead of keeping things cheap and simple.