>Just to be clear, microservices are not just separate binaries on a network. If you're not following the actual patterns of microservice architecture... you're just complaining about something else
So what you're saying is that the way to avoid this problem in a microservice architecture is to be disciplined and follow the right patterns. Then couldn't I just follow the same patterns in a modular monolith (eg: avoid shared state, make sure errors are handled properly, etc) and get the bulk of the benefits, without having to introduce network related problems into the mix?
> Then couldn't I just follow the same patterns in a modular monolith (eg: avoid shared state, make sure errors are handled properly, etc) and get the bulk of the benefits, without having to introduce network related problems into the mix?
Sure. Microservice architecture is a set of design patterns and a discipline for knowing how to structure your applications.
Many, including myself, would argue that making separate processes the focal point of the architecture leads to boundaries that are harder to break out of and abuse, but of course anyone can do anything.
Error handling is the easiest one. With any 'service oriented' approach, where processes are separated, you can't share mutable state without setting up another service entirely (ex: a database). Microservices encourage message passing and RPC-like communication instead, and it's much easier to fall into the pit of success.
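A toy sketch of that point (all names here are made up for illustration): once the "service" lives in another process, the only way to talk to it is an explicit message, and failure comes back as a value the caller has to branch on rather than state it can quietly poke at.

```python
import multiprocessing as mp

def worker(conn):
    """A tiny 'service': receives requests as messages, replies ok/error."""
    while True:
        msg = conn.recv()
        if msg == "stop":
            break
        try:
            result = 100 / msg              # may fail; the failure stays local
            conn.send(("ok", result))
        except ZeroDivisionError:
            conn.send(("error", "division by zero"))

if __name__ == "__main__":
    parent, child = mp.Pipe()
    p = mp.Process(target=worker, args=(child,))
    p.start()

    parent.send(4)
    print(parent.recv())    # ('ok', 25.0)
    parent.send(0)
    print(parent.recv())    # ('error', 'division by zero') -- caller must handle it

    parent.send("stop")
    p.join()
```

There's simply no way for the caller to reach into the worker's variables; the message protocol is the whole interface, which is the "pit of success" part.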
Could you do this with functions? Sure - you can just have your monolith move things to other processes on the same box. Not sure how you'd get there without a process abstraction, ultimately, but you could push things quite far with immutability, purity, and perhaps isolated heaps.
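For instance (a contrived sketch, not anyone's real code): when a monolith hands work to a process pool on the same box, arguments cross the boundary by copy (pickling), so the worker can't mutate the caller's state even if it tries. That's the isolation, with no network in sight.

```python
import multiprocessing as mp

def tag_items(items):
    """Runs in a worker process; receives a *copy* of the caller's list."""
    items.append("mutated-in-worker")       # only mutates the worker's copy
    return [f"tagged:{i}" for i in items[:-1]]

if __name__ == "__main__":
    data = ["a", "b"]
    with mp.Pool(1) as pool:
        result = pool.apply(tag_items, (data,))
    print(result)   # ['tagged:a', 'tagged:b']
    print(data)     # ['a', 'b'] -- the caller's list is untouched
```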
Because engineering discipline is actually hard. Not necessarily in the "here is how you do it" sense, just in the sense of getting the buy-in from engineers and engineering leadership that will make it happen.
This is like the one thing that microservices might actually be sort of good at: drawing a few very hard boundaries that do actually sort of push people in the general direction of sanity, e.g. it's easier to have basic encapsulation when the process might be on another computer...
I cannot figure out how you can see that. RPC just adds the "Remote" on top of the "Procedure Call" part; it adds a failure mode, but the thought process is the same.
As many teams have witnessed, spaghetti happens just as easily in a distributed monolith as it does in a proper monolith; the distribution just adds latency and makes it harder to debug.
The boundaries you're imagining are not drawn by the technology or by the separate codebases; they're drawn by the programmers making the calls. And I guarantee you that the average developer, with their usual OOP exposure, can much more easily understand where to draw decent boundaries following a pattern like Clean/Hexagonal/Onion/Whatever Architecture than with microservices, where it's far more arbitrary to determine the concerns of each service, especially when a use case cuts across previously drawn boundaries.
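To make that concrete, here's a minimal sketch of such a boundary drawn entirely inside one process (the names `OrderRepository`, `place_order`, etc. are invented for the example): the use case depends only on a port it owns, and the adapter behind it is swappable without touching the core.

```python
from typing import Protocol

class OrderRepository(Protocol):
    """The 'port': an interface owned by the application core."""
    def save(self, order_id: str, total: float) -> None: ...
    def get(self, order_id: str) -> float: ...

def place_order(repo: OrderRepository, order_id: str, total: float) -> float:
    """Use case: pure application logic, no I/O details leak in."""
    if total <= 0:
        raise ValueError("total must be positive")
    repo.save(order_id, total)
    return repo.get(order_id)

class InMemoryOrders:
    """An 'adapter' behind the port; could be swapped for a DB-backed one."""
    def __init__(self):
        self._orders = {}
    def save(self, order_id, total):
        self._orders[order_id] = total
    def get(self, order_id):
        return self._orders[order_id]

if __name__ == "__main__":
    print(place_order(InMemoryOrders(), "o-1", 9.99))   # 9.99
```

Same hard seam a service boundary gives you, but enforced by a type the team can see and review, not by a network hop.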