Hacker News | sumitkumar's comments

I feel it is more about being disinterested than about being good. The ones who were not interested (whether good or bad at it) and were trapped in a job are liberated and happy to see it automated.

The ones who are frustrated are the ones who were interested in the work (whether good or bad at it) but are being told by everyone that it is no longer worth doing.


I have seen more reactions to this tech than actual implementations that pushed the boundaries further. It is an amplifier of technical debt in a mostly naive user base (people experienced in bad patterns).

Take Anthropic, for example: they created MCP and Claude Code.

MCP has the good parts of how to expose an API surface, but also the bad parts: it keeps implementations stuck and forces workarounds instead of pushing required changes upstream or safely forking an implementation.

Claude Code is orders of magnitude less efficient than plainly asking an LLM to go through an architecture implementation. The opaque black-box loops in Claude Code are mind-bending for anyone who wants to know how it did something.

And Anthropic/OpenAI seem to just rely on user momentum rather than innovate on these fundamentals, because it keeps token usage high, and as everyone knows by now, an unpredictable product is more addictive than a deterministic one.

We are currently in the "Script Monkey" phase of AI dev tools. We are automating the typing, but we haven't yet automated the design. The danger is that we’re building a generation of "copy-paste" architects who can’t see the debt they’re accruing until the system collapses under its own weight.


Almost like we are making devs dependent on the tool. Not because of its capabilities but because understanding of the problem is lacking. Like an addiction dependency. We are all crack addicts trying to burn more tokens for the fix.

One more thing to add: the external communication code/infra is not written or managed by the agents, and is part of a vetted distribution process.


I was also startled when I learned about the human ancestor who was the first to see a mirror.

The brilliance of AI is that it copies (mirrors) imperfectly, and you can only look at part of the copy (an inference) at a time.


It seems true for Gemini because they have a humongous sparse model, but it isn't so true for the max-performance Opus 4.5/6 and GPT-5.2/3.


It is not about making it yourself, but a tradeoff between how much can be controlled and how much has seen the real world. Adding requirements learned from the mistakes of others is slower in self-controlled development vs. an open collaboration vs. a company managing it. This is the reason vibe-coded projects (initial requirements only) feel good to start but are tough to evolve (with real learnings).

Vibe-coded projects are high-velocity but low-entropy. They start fast, but without the "real-world learnings" baked into collaborative projects, they often plateau as soon as the problem complexity exceeds the creator's immediate focus.


Microservices are bad for teams without the discipline to implement "separation of concerns". They hope that physical network boundaries will force the discipline they couldn't maintain in a single codebase.

While microservices force physical separation, they don't stop "Spaghetti Architecture." Instead of messy code, you end up with "Distributed Spaghetti," where the dependencies are hidden in network calls and shared databases.

Microservices require more discipline in areas like:

Observability: tracking a single request across 10 services.

Consistency: dealing with distributed transactions and eventual consistency.

DevOps: managing N deployment pipelines instead of one.

For most teams, a modular monolith is often the better "first step." It enforces strict boundaries within a single deployment unit using language-level visibility (like private packages or modules). It gives you the "separation of concerns" without the "Distributed Spaghetti" network tax.
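As a sketch of what "language-level visibility" can look like when the language itself won't enforce it, here's a minimal, hypothetical boundary check in Python: each module declares which sibling modules it may import, and an AST scan flags violations. The module names (`billing`, `orders`, `shipping`) and the allow-list are made up for illustration.

```python
import ast

# Hypothetical module sources for a modular monolith; in a real project
# these would be files under one deployment unit.
MODULES = {
    "billing": "from orders import get_order\n",   # crosses a boundary
    "orders": "def get_order(order_id):\n    return {'id': order_id}\n",
    "shipping": "import orders\n",
}

# Explicitly allowed dependencies between modules (the declared surface).
ALLOWED = {
    "billing": set(),        # billing may not import siblings directly
    "orders": set(),
    "shipping": {"orders"},  # shipping is allowed to depend on orders
}

def boundary_violations(modules, allowed):
    """Return (module, imported_module) pairs that break the boundaries."""
    violations = []
    for name, source in modules.items():
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Import):
                targets = [alias.name.split(".")[0] for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                targets = [node.module.split(".")[0]]
            else:
                continue
            for target in targets:
                if target in modules and target not in allowed.get(name, set()):
                    violations.append((name, target))
    return violations

print(boundary_violations(MODULES, ALLOWED))  # [('billing', 'orders')]
```

Run as a CI step, a check like this keeps the boundaries honest without paying the network tax.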


> Observability: Tracking a single request across 10 services

I'm not sure if this is a discipline issue in the way that, say, domain-driven design is a discipline issue. If you instrument requests with a global ID and point a tool at it, then you're basically done from the individual team's perspective.


Uh, that's not my experience at all.

Sure, you can say e.g. "this property wasn't set in this request while being processed by this service managed by this team", but finding out why it wasn't set will inevitably need multiple teams, each doing an in-depth analysis of how such a state could have been caused, because these systems always end up as distributed monoliths. The former is provided by the instrumentation, but the latter isn't (and even the former is not perfect, as not all frameworks/languages have equal support).


> and shared databases.

According to my understanding, preventing shared databases is one of the reasons microservices were invented in the first place?


Pydantic/PydanticAI in builder mode, or LlamaIndex in solution-architect mode.


AI/non-AI/human/hybrid: It doesn't matter which one is the writer.

It's the reader who decides how good the writing is.

The joy which the writer gets by being creative is of no consequence to the reader. Sacrifice of this joy to adopt emerging systems is immaterial.


Even thin single-use plastic works. The first time I saw it, it was surreal.


The thinner the better --- thermal conductivity is what keeps the temperature from rising too far.
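To put rough numbers on why thinness matters, here's Fourier's law for steady conduction through a flat layer, dT = q * d / k. All values below are illustrative assumptions, not measurements from the thread: a heat flux of 5000 W/m2 and a generic plastic conductivity of 0.3 W/(m*K).

```python
# Fourier's law for a flat layer: dT = q * d / k.
# q and k below are assumed, illustrative values.
q = 5000.0  # heat flux through the film, W/m^2 (assumed)
k = 0.3     # thermal conductivity of a typical plastic, W/(m*K) (assumed)

for d_um in (50, 500):      # film thickness in micrometres
    d = d_um * 1e-6         # convert to metres
    dT = q * d / k          # temperature drop across the film
    print(f"{d_um:>4} um film -> dT = {dT:.2f} K")
```

Under these assumptions a 50 um film only holds back a fraction of a kelvin, while a ten-times-thicker layer holds back ten times as much, which is the "thinner is better" point.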

