The biggest beef I currently have with microservice architectures is that they're more annoying to work with when you're using LLMs. That is probably the biggest limiting factor for microservices in 2026: the tooling for multi-repo setups is there (I've been using RepoPrompt for this to really good effect), but LLMs in their default state, without a purpose-designed tool like that, just suck at microservices compared to a monorepo.
You could also turn around and say that it's a good context boundary for the LLM, which is true, but then you're back at the same problem microservices have always had: they push the integration work onto another team so that developers can make it Not Their Problem. Which is, honestly, just the same thing you said, framed a different way.
I think your statement can also be used against event-driven architecture: having a massive event bus that controls all the levers of your distributed system always sounds great in theory, but in practice you end up with almost exactly the problem you just described, because the tooling for offering those integration guarantees is nowhere near as robust as a centralized database.
I have found mostly the opposite, but partly the same. With the right tooling, LLMs are IMO much better in microservice architectures. If you regularly need to do multi-repo PRs or share code between repos as agents work, to me that's a sign you weren't really "doing microservices" before adding LLMs to your project: there should be some kind of API surface that you can share with LLMs in other repos, and cross-service changes should generally not be done by the same agent.
Even if the same dev is driving the work, it's like having a junior engineer do a cross-service staggered release and letting them skip the well-defined existing API surfaces. The entire point of microservices is that you are deliberately making that hard, introducing friction there on purpose, so things can be released and developed separately. IMO it has an easy solution too: just direct one agent per repo/service, the way you would if you really did need to make that kind of change anyway and wanted to do it through junior developers.
> they push the integration work onto another team so that developers can make it Not Their Problem
I mean yes and no, this is oftentimes completely intended from the perspective of the people making the decision to do microservices. It's a way to constrain the way people develop and coordinate with each other precisely because you don't want all 50 of your developers running amok on the entire codebase (especially when they don't know how or why that code was structured some way originally, and they aren't very skilled or conscientious in integrating things maintainably or testing existing behavior).
> so that developers can make it Not Their Problem
IMO this is partially orthogonal to the problem. Microservices don't necessarily mean you can't modify another team's code; that jealously-guarded-codebase mindset is generally pretty counterproductive for engineering teams. It just means you might need to send the other team a PR or coordinate with them first rather than making the change unilaterally. Or maybe you just want to release things separately; lately I find myself wanting that more and more, because past a certain size agents just turn repos into balls of mud or start reimplementing things.
This is never going to be the case; if you're finding that it is, something really weird/wrong is going on. Even with OpenAPI defs, if you're asking an agent to reason across service boundaries it has to do language translation on the fly during generation, which is 100% going to degrade attention, plus LLMs are just worse at reasoning over OpenAPI specs than over language-native types. You also no longer have a unified stack; instead the agent has to stitch together the stack and logs from a remote service.
If your agent is reasoning across service boundaries you should be giving it whatever you'd normally use when you reason across service boundaries, whether that's an openapi spec or documentation or a client library or anything else. I don't see it as any different than a human reasoning across service boundaries. If it's too hard for your human to do that, or there isn't any actual structured/reusable way for human developers to do that, that's more a problem with how you're doing microservices/developing in general.
> they have to do language translation on the fly in the generation, which is going to degrade attention 100%,
I'm not completely sure what you're alluding to, but if you don't have an existing client for your target service, developers are going to have to do that anyway, because they're serializing data to call one microservice from another. The only exception would be if you started calling the other application's code directly from your own, in which case, again, you're doing microservices wrong or shouldn't be doing microservices at all (or a lead engineer/other developers deliberately wanted to prevent you from directly integrating those two applications outside the API layer and it's WAI).
None of these seem like "microservices are bad for agents" problems to me, just "what I'm doing was already not a good fit for microservices / I should just not do microservices anymore" problems. Forcing integration to go through service boundaries that are independently built/managed is almost the entire point, as far as I'm concerned.
Think of it like this: if you're multilingual but I ask you a hard question with sections in different languages, it's still going to tax you more than if the question were asked in one language.
If you codegen client wrappers from your specs, that can help, but if something doesn't work predictably, the indirection makes debugging harder (both from a "cognitive" standpoint and from the inability to directly debug a unified system).
I prefer FaaS + shared libraries over microservices when I have to part things out, because it gives you the independence and isolation of microservices, but you're still sharing code across teams and working with a unified stack.
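To make the shape concrete, here's a minimal sketch of what I mean, with entirely made-up names (`@acme/billing` as the shared library, an AWS-Lambda-style handler signature):

```ts
// Hypothetical FaaS handler: the function deploys independently, but the
// validation/serialization logic lives in a versioned shared library that
// every team imports directly, instead of sitting behind a network hop.
import { parseInvoice, type Invoice } from '@acme/billing'; // hypothetical shared lib

// AWS-Lambda-style signature; swap in whatever your FaaS runtime expects.
export async function handler(event: { body: string }) {
  let invoice: Invoice;
  try {
    // Same parsing code every other function/team uses; no RPC, one stack trace.
    invoice = parseInvoice(event.body);
  } catch (err) {
    return { statusCode: 400, body: String(err) };
  }
  // ...this function's one job goes here...
  return { statusCode: 200, body: JSON.stringify({ id: invoice.id }) };
}
```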
The lock-in is quite annoying. I prefer using GitLab for private projects, but it means that if I want to FOSS those, I now need to support two different platforms: one for FOSS projects and one for my own stuff.
In general you shouldn't be letting your CI system's job orchestration be handled in YAML. It's just too complex a concept to try and capture in some half-baked YAML DSL.
The pattern I recommend is to use the CI system only as the event-trigger layer, e.g. setting up invocation in response to webhooks. Then you drop down into whatever orchestration layer you implement yourself to do the actual work. So in my configurations, the CI YAML is very minimal; it essentially says "set up env vars, inject secrets, install minimal deps and invoke the `ci` command of whatever adult system you so choose" (Dagger would be one example).
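As a rough sketch of how thin that trigger layer ends up (assuming GitLab CI, a `./ci` entrypoint committed to the repo, and secrets already configured as CI/CD variables; all names are illustrative):

```yaml
# .gitlab-ci.yml -- thin trigger layer only; all real pipeline logic lives in ./ci
build:
  image: alpine:3.20
  variables:
    ENVIRONMENT: production          # plain env vars; secrets come from CI/CD settings
  script:
    - apk add --no-cache bash curl   # just enough deps to bootstrap the real tool
    - ./ci build                     # hand off to the orchestration layer (e.g. a Dagger pipeline)
```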
What UI are you looking for outside of log streaming? If you want to see a DAG of your workflows and their progress you can use other systems as you say (Dagger has this), or your orchestration layer can implement that.
If you want to use the orchestration component of your CI tooling, you always can, and you'll get your DAG viewer, but you have to accept all of the constraints that come with that choice.
One of the reasons I like how lightweight Gitea/Forgejo is: it lets me develop with Argo CD locally. Spin up a kube cluster with Tilt, bootstrap Forgejo, bootstrap Argo and point it at Forgejo, and now I can test ApplicationSet changes with sync waves locally.
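For a concrete picture, here's roughly what that wiring looks like (every name here, the in-cluster Forgejo service URL, repo, and paths, is made up for illustration; the actual sync-wave annotations live on the manifests in the repo):

```yaml
# Hypothetical ApplicationSet under test, pointed at the local Forgejo.
# The manifests under overlays/{{env}} carry argocd.argoproj.io/sync-wave
# annotations; this resource just wires the generated apps to the local repo.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: local-test
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - env: dev
  template:
    metadata:
      name: 'myapp-{{env}}'
    spec:
      project: default
      source:
        repoURL: http://forgejo-http.forgejo.svc.cluster.local:3000/dev/manifests.git
        targetRevision: main
        path: 'overlays/{{env}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: 'myapp-{{env}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```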
I maintain knowledge bases in Obsidian-compatible repositories, and one thing that's been great is having a hand-rolled validation schema that validates against the AST output produced by remark. I call it a "markdown body grammar". So I can at least prevent people from doing edge-casey things at build time when they produce documents.
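For illustration, a stripped-down version of what a check like that can look like. The rules here (require a leading H1, don't skip heading levels) are made-up examples, not my actual grammar:

```ts
// Minimal "markdown body grammar" sketch: parse with remark, then walk the
// mdast tree and reject documents that violate structural rules at build time.
import { unified } from 'unified';
import remarkParse from 'remark-parse';
import type { Heading, Root } from 'mdast';

function validate(markdown: string): string[] {
  const tree = unified().use(remarkParse).parse(markdown) as Root;
  const errors: string[] = [];

  // Rule 1: the document must open with a level-1 heading.
  const first = tree.children[0];
  if (!first || first.type !== 'heading' || (first as Heading).depth !== 1) {
    errors.push('document must start with a level-1 heading');
  }

  // Rule 2: heading levels may not skip (e.g. h2 straight to h4).
  let lastDepth = 0;
  for (const node of tree.children) {
    if (node.type === 'heading') {
      if (node.depth > lastDepth + 1) {
        errors.push(`heading jumps from h${lastDepth} to h${node.depth}`);
      }
      lastDepth = node.depth;
    }
  }
  return errors;
}

// Fail the build on any violation.
const problems = validate('## starts at h2, not h1\n\nbody text');
if (problems.length > 0) {
  console.error(problems.join('\n'));
  process.exit(1);
}
```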
I haven't thought about it this way, but yeah, I totally agree. Most of the major consequences of my CC info getting leaked can be dealt with without major long-term impact on my life. The same cannot currently be said about PII.
Well, for non-standalone apps you also need some degree of ongoing support, at the very least to patch security bugs in your app or update libraries that have them.
And when whatever framework or big lib you use decides "well, we're making a new version, everything you made will break, have fun", that needs engineering too.
The buy-once model is pretty much only for standalone, offline apps. Anything online and you have to start worrying about supporting new TLS versions or having to update the certificate store (if the app for some bizarre reason ignores the system one).
The key with debit cards is the incentive misalignment. With credit, it's the bank that loses out, not you. With debit, it's you. Until the consequences are equalized by legislation, there's no world where the two get equal treatment from the bank.
it's transaction fraud insurance. like any insurance, you pay a small amount regularly, and in return get protection in case of large sporadic loss.
points are just premiums: some insurance consumers are a greater risk, and so pay more.
any convenience features are built on top of the insurance product: _because_ all players are covered, _therefore_ i can make online purchases. _since_ (i have a justified expectation that) i am not liable for fraudulent use of my account number, _therefore_ i can read it to a customer service rep over the phone.
we can of course debate whether 2% is a good price for this coverage! but there must be some price paid here -- if the insurance broker doesn't collect it, the scammers will. this, after all, is the real tragedy.
My friend, as a rule of thumb, every additional player in a transaction takes a cut.
So assuming the rest is all the same, you just paid exactly what you would've paid with a debit card, because the merchant had to raise prices to accommodate the fee. And that's assuming the credit card company takes no cut, which we all know isn't true.
The merchant chose not to offer a lower debit-card/cash price because it bets that people will accept a higher price when they use credit cards, so it incentivizes credit card usage by asking the same price for credit card and non-credit-card payments.
There are merchants that do not do this, such as Target, which charges 5% to use a credit card. Insurers/tutors/daycares/schools/healthcare providers/contractors/gas stations/restaurants/governments/utilities are also known to frequently charge more for credit card payments.
Any seller can choose to offer a lower price for debit card / ACH / Zelle payments if they want to.
Even ignoring the cut taken by the credit card issuer, why do I have to go through some random card to get a 2% discount, when prices could just be 2% lower across the board by default?
To add on to that: if someone fraudulently uses your credit card, it's the issuer's money that's now missing and they need to get it back. If someone fraudulently uses your debit card, it's your money that's now missing that you need to get back. Hopefully things don't start overdrawing your account in the meantime.
Yes, we'll open a dispute. Yes, we'll give you a credit immediately. But then we just take the seller's word for it that they're trying to make it right and charge you anyway.
That's my one and only experience with a dispute, but it's with a big bank that's gotten almost all of my transactions over the course of years...
A very big percentage of credit card expenses in the US come from cards with rewards programs, so you get money/gift cards/travel discounts in exchange for using the credit card instead of the debit card. A lot of this is funded from much higher interchange fees: It's ultimately the merchant you buy from funding most of the rewards. Since those very high fees are nowadays illegal in the EU, European credit cards cannot have this kind of generosity, and incentives are very different.
How does this work when using a US credit card in the EU? I assume the merchant still pays the lower interchange fee, so are the banks just betting that customers won’t do a large proportion of spending abroad?
So L2 is great; the issue is calling L2 "Full Self Driving".