I have never understood the effect CQRS seems to have on developers. It's a pattern that should be treated and applied with great caution.
---
A command feeds a data store, without ever returning anything back.
Given a query, the system returns some relatively useful model back.
Honestly, why is this so interesting?
Sure, you might want to add some autonomous component(s) that manipulate the data before returning it.
A good example of applying CQRS to a small part of a system:
1. Data is constantly being written (commands) into the system, at a high rate.
2. You want to present a "snapshot" of the data, because trying to do it "realtime" for all your users will demand too many resources, and your system will come to a halt.
3. You create the "snapshot" every 10 seconds, from code in an autonomous component, and then store it as a serialized object in a data store. Essentially a cache.
4. When a query hits the system, the system loads the "snapshot", deserializes it, and returns that.
Here you have two independent read and write systems. That's CQRS for you.
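The snapshot flow above can be sketched in a few lines of TypeScript. This is a toy, not a real implementation: all names (`WriteSide`, `SnapshotBuilder`) are illustrative, the "data store" is just a JSON string in memory, and the 10-second timer is replaced by an explicit `rebuild()` call.

```typescript
type Event = { amount: number };

class WriteSide {
  readonly log: Event[] = [];
  // Command: record the write, return nothing.
  handleCommand(amount: number): void {
    this.log.push({ amount });
  }
}

class SnapshotBuilder {
  // The "snapshot" lives as a serialized object, like a cache.
  private snapshot = JSON.stringify({ total: 0, count: 0 });
  constructor(private readonly writes: WriteSide) {}

  // In a real system this would run on a timer (e.g. every 10 seconds).
  rebuild(): void {
    const total = this.writes.log.reduce((sum, e) => sum + e.amount, 0);
    this.snapshot = JSON.stringify({ total, count: this.writes.log.length });
  }

  // Query: deserialize the cached snapshot; never touches the write path.
  query(): { total: number; count: number } {
    return JSON.parse(this.snapshot);
  }
}

const writes = new WriteSide();
const reads = new SnapshotBuilder(writes);
writes.handleCommand(5);
writes.handleCommand(7);
reads.query();   // still the stale snapshot: { total: 0, count: 0 }
reads.rebuild(); // the periodic job catches up
reads.query();   // { total: 12, count: 2 }
```

Note that queries can be stale by up to one rebuild interval; that eventual consistency is the price of the two independent paths.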
We use NestJS/CQRS in our property management app. Here's how we implement it:
1. There are two event handlers for "writes". One handler writes normalized data into a PostgreSQL DB. The other writes denormalized data into Firestore.
2. Our frontend uses Firestore, so mutations are reflected in realtime in the frontend. We never found a need for the command to return data. There is also no need for complex queries in Firestore, since our data is denormalized and optimized for reads.
3. The PostgreSQL DB is useful for reporting and complex queries. Our frontend app displays this data only in the reports area.
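A rough sketch of that fan-out, with in-memory stubs standing in for PostgreSQL and Firestore (the real app would use @nestjs/cqrs plus the actual database clients; the event shape and names here are made up for illustration):

```typescript
interface PropertyUpdated {
  propertyId: string;
  ownerId: string;
  ownerName: string;
  rent: number;
}

// Stub for the normalized store (reporting / complex queries).
const postgres = {
  properties: [] as { id: string; ownerId: string; rent: number }[],
};

// Stub for the denormalized store (what the frontend reads in realtime).
const firestore = new Map<string, { ownerName: string; rent: number }>();

const handlers: ((e: PropertyUpdated) => void)[] = [
  // Handler 1: normalized write, one row per property.
  (e) =>
    postgres.properties.push({ id: e.propertyId, ownerId: e.ownerId, rent: e.rent }),
  // Handler 2: denormalized write, shaped exactly for the read side.
  (e) => firestore.set(e.propertyId, { ownerName: e.ownerName, rent: e.rent }),
];

// The command side publishes the event and returns nothing;
// both stores update independently.
function publish(event: PropertyUpdated): void {
  handlers.forEach((h) => h(event));
}

publish({ propertyId: "p1", ownerId: "o1", ownerName: "Ada", rent: 1200 });
```

The point is that each handler owns its own storage shape, so the read model never has to join anything at query time.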
So far, I don't see how things can get confusing with this pattern.
> A command feeds a data store, without ever returning anything back.
You are describing CQS (Command Query Separation), à la Bertrand Meyer, rather than CQRS. The only thing CQRS says is that the read and write paths are different. It does not preclude a command from returning a response, or, for that matter, from being handled synchronously.
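To make that distinction concrete, here is a minimal sketch (all names illustrative): the read and write models are separate, which satisfies CQRS, yet the command synchronously returns the new record's id, which strict CQS would forbid.

```typescript
// Write model: source of truth.
const writeModel = new Map<string, { name: string }>();
// Read model: denormalized projection, kept in sync by the command.
const readModel = new Map<string, string>(); // id -> display name

function createUser(name: string): string {
  const id = `user-${writeModel.size + 1}`;
  writeModel.set(id, { name });
  readModel.set(id, name.toUpperCase()); // projection for the read side
  return id; // returning a result breaks strict CQS, not CQRS
}

const id = createUser("ada");
readModel.get(id); // "ADA"
```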
Because different patterns have different side effects, misapplying different patterns can yield different degrees of impact. Can't speak for the parent commenter, but CQRS can have nasty interplay between side effects. Commonly you would need to build interfaces that are aware of eventual consistency between the read and write models, and your data schema is definitely going to be shaped by the design choice. Misapplying an in-code pattern might only take an internals-only refactor to undo the damage; walking back a CQRS system takes a lot more.
Do not apply this pattern with a loose hand.