
Maybe they have tried and found it lacking?

I have an on-again, off-again relationship with LLMs, and I always walk away disappointed. Most recently I tried one on a hobby project, around 1k lines so far, and it outputs bugs galore, makes poor design decisions, etc.

It's OK for one-off scripts, but even those it rarely one-shots.

I can only assume people who find it useful are working on different things than I am.



Yeah, I'm in the "holding it wrong" camp too. I really want LLMs to work, but every time I put effort into getting them to do something, I end up with subtle errors or a conclusion that looks correct but isn't.

Most people tell me I'm just not that good at prompting, which is probably true. But if I'm learning how to prompt, that's basically coding with more steps. At that point it's faster for me to write the code directly.

The one area where it actually has been successful is (unsurprisingly) translating code from one language to another. That's been a great help.


I have never been told I'm bad at prompting, but people swear LLMs are so useful to them that I ended up thinking I must be bad at prompting.

Then I decided to take people up on their offers to help with a couple of problems I had and, surprise, the LLMs were still useless even when piloted by people who swear by them, on problems in the pilot's own area of expertise!

I just suspect we're indeed not bad at prompting but instead have different kinds of problems that LLMs are just not (yet?) good at.

I tend to reach for LLMs when I'm (1) lazy or (2) stuck. They never help with (2), which must mean I'm still as smart as they are (yay!). They beat me at (1), though; being indefatigable works in their favor.


My experience tracks yours. There seem to be a few different camps when it comes to LLMs, partly based on one’s job function and/or on contexts that the available LLMs simply don’t handle.

I cannot, for example, rely on any available LLM to do most of my job, because most of my job depends on both technical and business specifics. The inputs to those contexts are things LLMs wouldn’t have consumed anywhere else: specific facts about a client’s technology environment, say, or specific facts about my business and its needs. An LLM can’t tell me what I should charge for my company’s services.

It might be able to help someone just starting out figure out how to do that, based on what it’s consumed from Internet sources. That doesn’t really help me, though; I already know how to do the math. A spreadsheet or an analytical accounting package with my actual numbers is going to be faster and a better use of my time and money.

There are other areas where LLMs just aren’t “there yet” in general terms, either because of industry or technology specifics they’re not trained on, or because the work requires actual cognition and nuance that an LLM trained on random Internet sources isn’t going to have.

Heck, some vendors lock their product documentation behind logins you can only get if you’re a customer. If you’re trying to accomplish something with those kinds of products or services, then generally available LLMs aren’t going to provide any kind of defensible guidance.

The widely available LLMs are better suited to things that can easily be checked in the public square, or to helping an expert who can spot confabulations/hallucinations summarize huge amounts of information. Or to cases where they’re trained on specific, well-vetted data sets for a particular use case.

People seem to forget or not understand that LLMs really do not think at all. They have no cognition and don’t handle nuance.


Don’t get them to make design decisions. They can’t do it.

Often, I use LLMs to write the V1 of whatever module I’m working on. I try to get them to do the simplest thing that works and that’s it. Then I refactor it to be good. This is how I worked even before LLMs: do the simplest thing that works, even if it’s sloppy and dumb, then refactor. The LLM just lets me skip that first step (sometimes). Over time, I’m building up a file of coding standards for them to follow, so their V1 doesn’t require as much refactoring, but they never get it “right”.
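To make that concrete, here’s a made-up sketch (the function name and the CSV-ish file format are placeholders, not from any real project). The first version is the kind of V1 I’ll take from the model; the second is roughly what my refactor turns it into:

    # Made-up example: a hypothetical price-file loader, not from a real project.

    # V1 from the LLM: works on happy-path input, but sloppy -- it parses
    # each line twice, never closes the file, and assumes every line is well formed.
    def load_prices_v1(path):
        prices = {}
        for line in open(path):
            name = line.split(",")[0].strip()
            price = float(line.split(",")[1].strip())
            prices[name] = price
        return prices

    # After refactoring: same behavior on good input, but it closes the file,
    # skips blank lines, and reports *which* line was bad when parsing fails.
    def load_prices(path):
        prices = {}
        with open(path) as f:
            for lineno, line in enumerate(f, start=1):
                if not line.strip():
                    continue
                try:
                    name, raw_price = (part.strip() for part in line.split(",", 1))
                    prices[name] = float(raw_price)
                except ValueError as exc:
                    raise ValueError(f"{path}:{lineno}: bad line {line!r}") from exc
        return prices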

Sometimes they’ll go off into lalaland with stuff that’s so overcomplicated that I ignore it. The key is noticing when they’re going down some dumb rabbit hole and bailing out quickly. They never turn back. They’ll always come up with another dumb solution to fix a problem they never should have created in the first place.


I do the designing, then I write a comment explaining what should happen, and the LLM adds a few lines of code. Then I write another comment, and so on.

I get code very similar to what I would normally write, but much faster and with comments.
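A made-up illustration of the workflow (the task itself is arbitrary): the comments are what I write, and the code under each one is the sort of thing the LLM fills in.

    # Made-up example: the task (word counting) is arbitrary, just to show the
    # comment-first workflow.
    from collections import Counter

    def top_words(text, n=10):
        # Normalize the text: lowercase everything and replace punctuation
        # with spaces so "Word," and "word" count as the same word.
        cleaned = "".join(ch.lower() if ch.isalnum() else " " for ch in text)

        # Split into words and count how often each one occurs.
        counts = Counter(cleaned.split())

        # Return the n most common words with their counts.
        return counts.most_common(n)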



