> Asynchronous programming with async/await is about revealing the time dimension of execution as a first class concept
People are more likely to assume their code is fast enough and not worry about the execution time of synchronous data processing, then spend weeks investigating why the p99 latency is 5 seconds with clusters of spikes.
Async IO is almost entirely about efficiency. It's telling the OS that you can manage context switches better than it can. Usually this means you're trading latency for throughput. That tradeoff for efficiency is fine, but it needs to be conscious, and most of the time you actually want lower latency.
Are you sure the developer is best placed to determine these context switches? For a low-level language like Rust, sure. But for higher-level programming, e.g. some CRUD backend, should the developer really care about all that added complexity when the runtime knows just as much, if not more? It's a DB call? Then just use the OS's async primitive in the background and schedule another job in its place until it "returns". I gain nothing from manually marking the points where this could happen.
I think the Java virtual thread model is the ideal choice for higher level programming for this reason. Async actually imposes a much stricter order of execution than necessary.
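For anyone who hasn't tried it, here's a minimal sketch of that model (assuming Java 21+; the class name and the `Thread.sleep` standing in for a blocking DB call are mine). Each task gets its own virtual thread, the blocking call parks only that virtual thread, and the carrier threads stay free to run other tasks:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {

    // Runs n tasks that each "block" for 50 ms, one virtual thread per task.
    // The blocking sleep parks the virtual thread; the few OS carrier threads
    // are released to run the other tasks, so all n sleeps overlap.
    static int runBlockingTasks(int n) {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(Duration.ofMillis(50)); // stands in for a blocking DB call
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runBlockingTasks(10_000));
    }
}
```

The code reads fully synchronous, yet thousands of "blocking" tasks run concurrently; the suspension points that async/await forces you to mark by hand are chosen by the runtime instead.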
Same as how we used to write asm by hand, until compiler-generated code became good enough, or even better than hand-written asm.
I predict the async trend will fade as hardware and software improve. Synchronous programming is a higher-level abstraction than explicit async, and higher-level always prevails given enough time, as management always wants to hire the cheapest devs for the task.
Not sure why the downvotes. Async programming is harder than sync, as one needs to know not only one's own code but also all the dependencies. Since the benefits of async are limited in many scenarios[1], I'd expect the simpler abstraction to win.
I am a CTO at a large company and I routinely see tech leads who don't understand what happens under the hood of an async event loop and act surprised when weird, hard-to-debug p99 issues occur in prod (because some obscure dependency of a dependency does sync file access for something trivial).
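That failure mode is easy to reproduce. A sketch (the class name and timings are hypothetical, and a single-threaded executor stands in for the event loop): one hidden blocking call delays every unrelated task queued behind it, which is exactly what surfaces as p99 spikes.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EventLoopStall {

    // Returns how long (in ms) a trivial task waited because the "event loop"
    // thread was occupied by a hidden blocking call.
    static long measureStall() throws Exception {
        ExecutorService loop = Executors.newSingleThreadExecutor(); // stand-in for an event loop
        try {
            // An obscure dependency quietly doing synchronous I/O on the loop:
            loop.submit(() -> {
                try {
                    Thread.sleep(300); // simulated sync file access
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            long start = System.nanoTime();
            loop.submit(() -> {}).get(); // a trivial task that should be instant
            return (System.nanoTime() - start) / 1_000_000;
        } finally {
            loop.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("trivial task delayed ~" + measureStall() + " ms");
    }
}
```

The trivial task's latency jumps to roughly the duration of the blocking call, even though the two tasks have nothing to do with each other; in a real service this shows up as unexplained tail-latency clusters rather than a clean error.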
Abstractions win, people are lazy, and most new developers lack understanding of lower-level concepts such as interrupts or polling, nor can they predict which code may be CPU-bound and unsafe in an async codebase. In a few years, explicit async will be seen the way C is seen today: a low-level skill.
[1] if your service handles, say, 500qps on average the difference between async and threaded sync might be just 1-2 extra nodes. Does not register on the infra spend.