Hacker News | davebranton's comments

Interesting how many people "Like AI" because it's good at all the jobs other than the one they happen to make a living doing.

Did you hear about the screenwriters school in which the professors said to avoid AI for writing, but it's great for storyboards. And the storyboard school where the professors said the opposite?

The reality is that AI isn't actually "good" at anything. It produces passable ersatz facsimiles of work that can fool those not skilled in the art. The second reality of AI is that everyone is busy cramming it into their products at the expense of what their products are actually useful for.

Once people realise (1), and stop doing (2), the tech industry has a chance of recovering.


Yeah, I think I heard about that. Within certain domains it is certainly a useful tool. I would say things like online search are much nicer now (in that asking an AI is equivalent to searching online but it summarizes it for you). Online search fits the strengths of LLMs nicely, but right now it's being sold as a silver bullet, which it's not.

I have no design talent but wanted to help my partner with some charts. She was making them in Excel. I had Claude Code build them as web docs and they look quite good. Probably had to give it around thirty instructions about changes, which was pretty inefficient, but then again I couldn't have created them myself and they look far nicer than the charts she got from Excel.

It's really just about recognizing what they can do well and applying them in the right moments.


It's a version of the Gell-Mann Amnesia effect.

Indeed. But they won't get to "AGI", because that goal isn't even remotely defined. A "human-level" intelligence implies a large number of properties that cannot exist inside an inference machine. Dreams, for example, might be considered to be a part of "human-level" intelligence. Will the machine dream?

What happens if you turn a "human-level" intelligence off? Did you kill someone?

AGI is a pipe dream - and moreover it's not even something that anyone actually wants.


>Will the machine dream?

You seem to be mixing up intelligence and consciousness. Not only does intelligence exist outside of humans, and even mammals, but it exists outside of brains and even neurons. For example, slime molds have fascinating problem solving abilities: https://www.nature.com/articles/nature.2012.11811

It is clear that whatever we are...creating/growing with LLMs, it is very unlike human intelligence, but it is nonetheless some type of intelligence.


AGI just means a machine, system, or whatever that can do anything at least as well as a human. The details don't matter as much as its ability to match humans in everything they are paid money to do.

And obviously if such a system existed, the benefits (and risks) would be enormous, though the risks are smaller if you control it vs someone else, which is why every company is racing towards it.


I made such a thing ten years ago with an ESP8266 and a basic iOS app built using HTML and JavaScript. It still works perfectly.

The valves were 12V solenoids from AliExpress, and the plumbing was from the hardware store. I almost guarantee it was far, far cheaper than this project.


Precisely. As I wrote in my assessment of AI for my workplace:

"Your unique human voice is more valuable than a thousand prompt-driven LLM doggerels."


The more you write, the less this will be true. The more you write, the better you will become at it. Using an LLM to write is like sending a robot to the gym for you.

The more you use an LLM to write for you, the worse you will become at writing yourself. There is simply no other possible outcome. It's even true of spellcheck - the more you use a spellcheck the worse you become at spelling. I know this for a fact because I can no longer spell for shit. However, spelling is to writing as arithmetic is to mathematics. I also can't add up, but I have a degree in pure mathematics.

LLMs are a cancer on human thought and expression.


> LLMs are a cancer on human thought and expression.

LLMs help to express what many people don't have the energy or ability to express. They also have a broader-scoped view of protocol...They do not have emotions, which often lead to less-than-optimal discourse.

In many ways, they help those who are challenged in discourse to better express themselves...rather than keeping silent or being misunderstood.


It doesn't matter.

The guidelines are perfectly clear, no matter the outcome of your thought experiment. Hacker News wants intelligent conversation between human beings, and that's the beginning and the end of it.

If you want LLM-enhanced conversation then I'm sure you will find places to have that desire met, and then some. Hacker News is not that place, and I pray that it will never become that place. In short, and in answer to "Do we prefer text with the right 'provenance' over higher-quality text?":

Yes. Yes, we do.


Why would somebody read something that somebody couldn't be bothered to write? This article is AI slop.


Personally, I found the spoof song in the middle of very dry writing to be jarring. But I didn't think it sounded AI written.


What stood out as AI written? It felt like a well-written article by an SME to me.


Not the original commenter, but I noticed it too. I guess it's hard since AI is trained on human content, so presumably humans write like this too, but a few that stood out to me:

> Five entire countries vanished from GreyNoise telnet data: Zimbabwe, Ukraine, Canada, Poland, and Egypt. Not reduced — zero.

> An attacker sends -f root as the username value, and login(1) obediently skips authentication, handing over a root shell. No credentials required. No user interaction.

> The GreyNoise Global Observation Grid recorded a sudden, sustained collapse in global telnet traffic — not a gradual decline, not scanner attrition, not a data pipeline problem, but a step function. One hour, ~74,000 sessions. The next, ~22,000.

> That kind of step function — propagating within a single hour window — reads as a configuration change on routing infrastructure, not behavioral drift in scanning populations.

(and I'm not just pointing these out because of the em dashes)

GPTZero (which is just another AI model that can have similar flaws and is definitely not infallible, but is at least another data point) rates my excerpts as 78% chance AI written, 22% chance of AI-human mix.

To me at least, the article still seems to be majority human-written, though.


Also, one of the authors is "Orbie", which looks like an AI name, and if you go and read through some of the recent posts, all of the posts with that author feel very LLM-y and bland, and the posts without that author are much more normal.

GGP has a good eye.


The deep, profound, cruel irony of this post is that it was written by AI.

Maybe if you work in the world of web and apps, AI will come for you. If you don't, and you work in industrial automation and safety, then I believe it will not.


I was thinking the same thing, but I thought I was being too cynical given it was a post lamenting about all the cognitive abstractions we have created.


The linked article on medium was also written by AI, which immediately disqualifies it from being interesting or useful.

"And the worst part? Apple didn’t provide a switch to turn it off."

Now see, this is AI. A normal human being would write, "Apple didn't even provide any way to switch off this non-feature" - for example. AI always, for reasons that are likely neither interesting nor especially illuminating, writes like this. Unnecessary and stupid stylistic choices everywhere.

Look, if you cannot be bothered to write something, why on God's Good Earth would anyone bother to read it?


Is this the new em dash witch hunt?

I sometimes write like that because I noticed that regular people tend to pay more attention if some things are written a specific way. It's like an FAQ.

I’ll continue to use bolded titles and bullet points when writing for a regular audience.


If AI does this, it's because it's ingested the last 25 years of bad internet headlines. Written allegedly by humans.


And to be honest? It’s really annoying indeed.


I like it. A lot of times I just skim, so sentences like this let me know when to pay more attention to some point the author is trying to make.


I'm not saying you're wrong about this article being AI-written, but I know people who write like this, and I hate it.


Exactly. AI wrote this way because humans wrote this way. Just as humans used em-dashes long before AI was a thing.


If I see another AI-written trash article I am going to scream. Overlong, overwritten garbage. People used to write, and there was personality in that writing. Now people believe it's acceptable to generate reams of utter formless shite and post it on the internet.

If you cannot be bothered to write something, why on God's good earth would you expect anyone to be bothered to read it?


I'd normally agree, but this is a case I don't see often -- despite the form being terrible the content is good. I certainly would strongly prefer the same post with better writing, but if the entire 2019 internet were replaced with articles like this (on orthogonal topics/micro-topics) I think it'd be a better place.

