Software Engineering
domain-driven-design layers transaction
Updated Thu, 30 Jun 2022 17:16:35 GMT

How is transactionality implemented in DDD applications?

I have been searching and reading about DDD recently and, so far, I think I have understood its foundations.

If I understood well, the architecture is similar to this:

Database <--SQL--> DAO/ORM <--CRUD--> Repository/Aggregates <--Business--> **?** <-- Controller --> Internet/Client/UI

My doubt revolves around the **?** gap. I usually fill it with services.

My services are often the first symptom of an Anemic Domain Model, because all the business logic ends up there. As a consequence, my domain model is a mere set of POJOs.

My intention is to gradually move my current project towards a richer domain model and a thinner service layer. However, I'm concerned about transactionality and which layer it belongs to.

Searching about how to fill up the above gap, I have found this question

Following the link and the checked answer, I assume that:

  • There are still services in DDD. These services perform business operations via repositories (and aggregate roots).

  • Services are meant to cover the business needs that cannot be covered by aggregate roots and/or repositories.

  • Services execute business transactions through UnitOfWork (UoW) components. A UoW might involve one or more aggregate roots and repositories.

Question: Is this the way to implement the business layer and transactionality in DDD? (App Service -> UoW -> Repository)
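For concreteness, the App Service -> UoW -> Repository shape I have in mind looks roughly like this (a minimal in-memory sketch; `OrderService`, `UnitOfWork`, `OrderRepository` and `Order` are all invented names, not tied to any framework):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class Order {                       // aggregate root (rich model, not a mere POJO)
    private final String id;
    private boolean shipped;
    Order(String id) { this.id = id; }
    String id() { return id; }
    boolean isShipped() { return shipped; }
    void ship() {                   // business rule lives on the aggregate
        if (shipped) throw new IllegalStateException("already shipped");
        shipped = true;
    }
}

interface OrderRepository {
    Optional<Order> byId(String id);
    void save(Order order);
}

class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Order> store = new HashMap<>();
    public Optional<Order> byId(String id) { return Optional.ofNullable(store.get(id)); }
    public void save(Order order) { store.put(order.id(), order); }
}

interface UnitOfWork {              // owns the transaction boundary
    void commit();
}

class NoopUnitOfWork implements UnitOfWork {
    public void commit() { /* flush changes and commit the DB transaction here */ }
}

class OrderService {                // thin application service: orchestration only
    private final OrderRepository orders;
    private final UnitOfWork uow;
    OrderService(OrderRepository orders, UnitOfWork uow) {
        this.orders = orders;
        this.uow = uow;
    }
    void shipOrder(String orderId) {
        Order order = orders.byId(orderId).orElseThrow();
        order.ship();               // domain logic stays on the aggregate
        orders.save(order);
        uow.commit();               // one aggregate, one transaction
    }
}
```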

If yes

How do application events and handlers fit into such an architecture? Are the services translated into handlers? (Handler -> UoW -> Repository)
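For example, would a handler look roughly like this? (A minimal sketch with invented names; the UoW is reduced to comments, and the repository is an in-memory map.)

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

record MarkPaid(String invoiceId) {}            // command/event payload

class Invoice {                                 // aggregate root
    final String id;
    boolean paid;
    Invoice(String id) { this.id = id; }
}

class InvoiceRepository {
    private final Map<String, Invoice> store = new HashMap<>();
    Invoice byId(String id) { return store.get(id); }
    void save(Invoice i) { store.put(i.id, i); }
}

// The handler plays the same role as an application service:
// it owns the transaction boundary (Handler -> UoW -> Repository).
class MarkPaidHandler implements Consumer<MarkPaid> {
    private final InvoiceRepository invoices;
    MarkPaidHandler(InvoiceRepository invoices) { this.invoices = invoices; }
    public void accept(MarkPaid cmd) {
        // begin UoW / transaction here
        Invoice invoice = invoices.byId(cmd.invoiceId());
        invoice.paid = true;                    // domain change
        invoices.save(invoice);
        // commit UoW here
    }
}
```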

I am grateful for any kind of input. I feel somewhat confused about how DDD layers are tied together, and how to tie them with as little coupling as possible.


A UoW might involve one or more aggregate roots and repositories.

No, absolutely not. That misses the entire point. We always change one aggregate at a time (one per transaction).

Transactions are typically coordination between the application component and the persistence component. The application starts a transaction (UoW, if you like), reads the target aggregate, modifies the aggregate, saves it, and commits.

If that commit succeeds, there have been no conflicting writes to that aggregate, and the command itself has succeeded. If there are conflicting writes, the commit fails, and the application component gets to figure out the recovery strategy (merge, rerun the command from a new starting point, report failure to the caller, etc).
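A minimal sketch of that read-modify-save-commit cycle with conflict detection via optimistic versioning (all names here, `VersionedRepository`, `ConcurrencyException`, `DepositCommand`, are invented for illustration; a real persistence layer would do the version check in the database):

```java
import java.util.HashMap;
import java.util.Map;

class ConcurrencyException extends RuntimeException {}

class Account {                         // aggregate with a version counter
    final String id;
    int balance;
    int version;                        // incremented on every successful save
    Account(String id, int balance, int version) {
        this.id = id;
        this.balance = balance;
        this.version = version;
    }
}

class VersionedRepository {
    private final Map<String, Account> store = new HashMap<>();
    void seed(Account a) { store.put(a.id, a); }
    Account load(String id) {
        Account a = store.get(id);
        return new Account(a.id, a.balance, a.version);   // detached copy
    }
    void save(Account a) {
        Account current = store.get(a.id);
        if (current != null && current.version != a.version)
            throw new ConcurrencyException();             // conflicting write
        a.version++;
        store.put(a.id, a);
    }
}

// Application-side recovery strategy: rerun the command from a fresh read.
class DepositCommand {
    static void run(VersionedRepository repo, String id, int amount) {
        for (int attempt = 0; attempt < 3; attempt++) {
            Account a = repo.load(id);                    // new starting point
            a.balance += amount;
            try {
                repo.save(a);                             // commit attempt
                return;
            } catch (ConcurrencyException e) {
                // lost the race; loop and retry from the current state
            }
        }
        throw new ConcurrencyException();                 // report failure to caller
    }
}
```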

If I need to modify 2 aggregates in one transaction, it is probably caused by a bad design of the aggregate roots.

That, or a failure to understand the real requirements of the business. It's a relatively common pattern to assume two changes need to be tightly coupled when the real business case has a lot more flexibility. Classic example: assuming an account balance must never fall below zero, when in fact the business is happy to accrue overdraft penalties.
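To sketch the overdraft example (invented names and a made-up fee; the point is that the rule lives inside a single aggregate, so no cross-aggregate transaction is needed):

```java
// Instead of enforcing "balance never below zero" across aggregates,
// the Account aggregate itself accrues a penalty when it goes negative.
class OverdraftAccount {
    private static final int OVERDRAFT_FEE_CENTS = 3500;  // illustrative fee
    private int balanceCents;
    OverdraftAccount(int balanceCents) { this.balanceCents = balanceCents; }
    int balanceCents() { return balanceCents; }
    void withdraw(int amountCents) {
        balanceCents -= amountCents;              // the business allows this
        if (balanceCents < 0)
            balanceCents -= OVERDRAFT_FEE_CENTS;  // accrue the penalty instead
    }
}
```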

Comments (1)

  • +4 – This sounds like it is oversimplifying the issue. It is very common to have operations (and by very common I mean most scenarios outside pure DDD theory) that require updating several resources, and thus a transaction that spans multiple aggregates sounds like the only reasonable option. — Apr 12, 2021 at 20:37
