Hacker News

I think this is the root of why people defend AI in some circumstances. They feel a give-for-get type of relationship where the AI continuously (and often incorrectly) reinforces them. So they enjoy it and subconsciously want to defend that "friend". No different from defending a friend you inherently know may be off base.



I don’t know, I think it has to do with people using AI for completely different reasons.

Using AI for coding is different than using it for art generation which is different than using it for conversation. I think many people feel some uses are good and some are bad.


I'm seeing people who are technically savvy defend mediocre code and consumption-based output (think technical briefs and reports). When the flaws in the output are highlighted, in many cases it's brushed off as "good enough" or "nobody will care / notice".

I think LLMs, and more aptly SLMs, have use cases. I enjoy using these tools to make quick work of relatively frequent but time-consuming tasks, simplifying them and iterating faster. But I'm always correcting and checking, and very rarely, outside of simple and focused scripts, does any LLM truly get it right every time. Has it gotten better? For sure. Will it keep getting better? Probably. But right now we seem to be topping the "peak of inflated expectations", and LLMs aren't getting much more efficient at the frontier providers. In fact, if you listen to Altman, it seems the only reason he would ask for so much capital and so many finite resources is that he knows that if he controls those tangible things he will lock out competition. But I'm hopeful it spurs real innovation in SLMs that are truly useful and dependable, and that can be relied on in the manner of more traditional, deterministic software.

AI for art is dead. It has some mediocre use cases, but true art will not be generated by LLMs in our time; the output is ultimately an amalgamation of existing art. I know the argument over what is novel or not keeps being rehashed, but we're not seeing truly new styles of art out of Nano Banana and the like. Coding is the same thing, only there we're seeing a resurgence of obviously flawed software being pushed into production weekly. And as for conversational AI... Well, that reeks of the worst version of social media we could ever have dreamt. Nobody should trust any provider with personal conversations, and we'll keep seeing these models show how truly dystopian they can be over the coming years, as leaks and breaches expose how these conversations are being bought and sold to the highest bidders to extract more money and control from their users.

They all have a common thread: deep-rooted flaws that cannot be contained within the traditional fences of software. And their guardrails are just that: small barriers that can easily be broken, intentionally or unintentionally.


I am curious to know how you are coming to these conclusions. I have been a computer programmer for over 30 years, and I have pretty solid evidence that I am good at it.

I have been using AI to write some very capable, well written, well tested, novel software projects.

Now, is it easy to use coding AIs to generate really bad code? Yes. Does that mean it is impossible to get them to generate good code? No, I don't think it is.

Coding with AIs is just like any other type of coding, it takes skill and practice. Not everyone is able to create great code with AI, because you need to use it in the correct way.

There are a lot of techniques that people have been discovering to get the AI to output better code. It is a very active field, and people are experimenting and coming up with frameworks and strategies to improve the quality. That work is paying dividends.

You can write very bad code with any language or tool. AI doesn't (yet!) allow non-coders to create great code, but it certainly can create great code in the hands of experts.


> I am curious to know how you are coming to these conclusions.

What I have stated is what I have seen first hand and continue to see. They aren't conclusions, they are observations.

>I have been a computer programmer for over 30 years, and I have pretty solid evidence that I am good at it.

OK.

> I have been using AI to write some very capable, well written, well tested, novel software projects

That's great; I'm sure this is all true, with the exception of "novel software projects". Any examples?

> Now, is it easy to use coding AIs to generate really bad code? Yes. Does that mean it is impossible to get them to generate good code? No, I don't think it is.

Sure. This is basically what I already said.

> Coding with AIs is just like any other type of coding, it takes skill and practice. Not everyone is able to create great code with AI, because you need to use it in the correct way.

There is no one correct way, because LLMs are architecturally non-deterministic. You don't know how the LLM will respond to any given prompt.
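To make the non-determinism claim concrete, here is a minimal toy sketch (not any real model's API) of temperature-based next-token sampling, the step that lets the same prompt produce different outputs across runs. The logit values and function name are invented for illustration.

```python
import math
import random

def sample_token(logits, temperature=0.8, rng=random):
    """Pick a token index by softmax sampling over temperature-scaled logits."""
    # Scale logits by temperature, then softmax into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to those probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Same "prompt" (identical logits), sampled repeatedly: the chosen
# token varies from draw to draw, which is why two runs of the same
# prompt can diverge.
logits = [2.0, 1.5, 0.5]
draws = {sample_token(logits) for _ in range(200)}
```

At temperature 0 (greedy decoding, always taking the argmax) the output would be repeatable, but production chat and coding assistants typically sample with a nonzero temperature, so identical prompts can and do yield different completions.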

> There are a lot of techniques that people have been discovering to get the AI to output better code. It is a very active field, and people are experimenting and coming up with frameworks and strategies to improve the quality. That work is paying dividends.

I never said LLMs didn't have a level of value, but it's not paying dividends once you account for the true cost of LLMs. Frontier models are heavily subsidized at today's prices. Do you think Claude Code is worth $2k per month? $20k? Is driving up energy prices for people who don't care about software another one of these "dividends"? How do you weigh the use of finite resources against the generation of AI images? I'm curious.

> You can write very bad code with any language or tool. AI doesn't (yet!) allow non-coders to create great code, but it certainly can create great code in the hands of experts.

OK. But then you're saying this is a tool you need expertise in to use safely and effectively. Basically what I've already stated.

> "...great code in the hands of experts".

Any expert with an internet connection can already create great code. So your argument is that it saves experts time, and you agree that AI can create poor code and insecure systems when left to "non-experts". But the part you're leaving out is that the AI won't tell the "non-experts" anything of the sort. How... novel!



