multjoy's comments | Hacker News

Well done you.

Catch all the security holes while you were reviewing it, or did you leave those to the machine as well?


In many jurisdictions (the UK, in particular) charging people to apply for work is specifically illegal.


>The thing is, it will soon become trivially easy to modify your own "text editing environment,"

elisp enters the chat


It says it finds errors.


It gives references that you can then verify manually. I wasn't advocating for a 100% automated process.


Gear Acquisition Syndrome is a very different problem. Even if you haven't cured the issue the new synth was meant to fix, at least you have a new synth.


It is an almost universal fact that dealing with retail customers is left to the lowest-paid, lowest-status workers, often outsourced, and now increasingly handed off to LLM chatbots.

While you obviously can't have highly paid engineers tied up dealing with user support tickets, there is a lot to be said for at least some exposure to the coal face.


> While you obviously can't have highly paid engineers tied up dealing with user support tickets,

You obviously can. It's one of the more visceral ways to make them aware of the pain their work causes real people, which sticks better, or it simply serves as a reminder that there are humans on the other side. There are even examples of highly paid CEOs engaging this way; we can see some of that on social media.


Given that LLMs have no understanding of the text, what is the point of it?


To shuffle it up in semi-random ways that make you think. If you're determined to hate LLMs for any reason or any purpose, just think of them as an elaborate game of Exquisite Corpse or Ultimate Mad-Libs.

You don't have to think LLMs are smart or real people to think of them as useful. I love it when I can make an idea clear enough in text that an LLM can completely regurgitate it and build upon it. I also love it when an LLM trips over and misses the one real novelty that I've slipped into something; what better for an originality test than trying to choke an automatic regurgitator?

Transistors have no understanding of what I'm doing, but somehow I still find them useful.


I feel people talk to LLMs in chat format just so they feel there's someone listening. This puts that in a journaling/blogging context, hopefully delivering the same value in a unique context.


What do you mean by "no understanding of the text"?


Isn’t there an OS with a text editor that is based on lisp?


It would encourage Starlink to put yet more crap into Low Earth Orbit and see them fill the atmosphere with barely understood pollutants.


AI doesn’t necessarily mean an LLM; LLMs are the systems that make things up.

