> The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.
> Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"
But don't forget the near-antonym, the ELIZA effect, helpfully linked from the same Wikipedia page. People love to project cognitive explanations onto any complex system behavior they observe.
I only briefly toured the AI field over 25 years ago, after the prior AI winter was old news. I remember a couple of epigrams I heard from specialists at the time:
"AI is just advanced algorithms", which I think is a (slightly bitter, mostly pragmatic) reflection on the AI effect by people who had had to adjust their outlooks after that AI winter.
"Play syntactic games, win syntactic prizes", which I think was meant to remind one of the ever-present ELIZA effect.
My biggest worry about the current AI hype cycle is that it will be marketed with the same kind of obfuscation as the Web 2.0-era and later user-driven content distribution systems. I'm afraid most consumers do not understand how much smoke and mirrors is involved in making products seem smarter and more authoritative than they actually are, and how users themselves subconsciously cover real gaps in the correctness or completeness of the products, putting more faith in them than is really warranted.
What is happening here is the opposite. We have those infinitely hyped chatbots (which, granted, can currently chat aimlessly better than I can, which is impressive) that are sold as silver bullets able to kill any monster. But all they can do is chat, by construction.
At the same time, other AIs are constantly progressing at other, useful things, completely ignored by nearly everybody.
> But all they can do is to chat, by construction.
Perhaps, but with the ability to traverse the web and interface with applications, there might actually be a qualitative difference from chatbots of yore.
> At the same time, other AIs are constantly progressing at other, useful things, completely ignored by nearly everybody.
Well, yeah, the chatbots of today are qualitatively different from the chatbots of yore. That doesn't turn them into secretaries, no matter how much they can mislead you into thinking they are.
> Interesting, such as what?
Factory robots, terrain traversal, object recognition, speech recognition, voice synthesis... On the symbolic side, have you looked at modern compilers and linters?
Language models should have a huge impact on search, and yes, search is a very impactful area. But that too requires something a bit different from a chatbot: not a completely new technology, but different applications.
The motives are different this time. Those in the know are deliberately downplaying and misdirecting to stave off panic and political meddling. So far it seems to be working. The people I know who aren't following closely don't seem to realize how we may be on the brink of something very big. And these are very smart and technical people.
I do wonder if the intelligence agencies are already deeply involved. If OpenAI is working with them training no-guardrails AIs.
I also wonder if the CCP/PLA is pushing Baidu's AI efforts, or if Baidu is working on its own and trying to avoid official attention.
Yup, I have used tons of chatbots over the years, and they all look like worthless toys compared to GPT-4. Saying they are similar is like saying mobile phones are similar to old diaries.
50% of software engineers unemployed in 2 years, 5 years, 10 years?
Or something like 50% of the US/world population using a ChatGPT-style assistant to perform a complicated mental task (i.e., not "Alexa, set a 2-minute timer") once a day/week in 2 years, 5, 10?
Don't get me wrong, ChatGPT's emergent intelligence is impressive, and I'm playing with Alpaca 13B and other models locally, but I'm not sure it's going to be as transformative, or arrive as quickly, as many here seem to think. Humans and society are inherently resistant to, and slow to, change.
The World Bank currently pegs the working-age population (15-64) at 5.12B, so that'd mean ChatGPT would put nearly 6% of the working-age population out of a job (of course, depending on your definition of "displace").
Create a detailed counterpoint to the mentioned post. Refer to all the instances where similar sentiments towards technological advances have been false, especially in the context of AI. Cite your sources using Chicago style. Format everything as a LaTeX document, where every sentence is first written in English and then in German translation. Mark the German translation in green. The document shall be in A4 format. Cite the comment in the beginning. Refer to the original poster not by his Hacker News name but by "Unidentified Source 1".
That's exactly what it is, despite the "GPT bros" playing it up. It's literally a scaled-up ELIZA, with billions of substitutions instead of tens. People just get dazzled by it and read too far into what it's doing.
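For readers who haven't seen the original, ELIZA's "substitutions" were scripted pattern-to-template rewrites. The toy sketch below illustrates that mechanism with a few hypothetical regex rules of my own; it is a minimal caricature of ELIZA's keyword/decomposition tables, not a claim about how GPT-style models actually work.

```python
import re

# Toy ELIZA-style responder: a handful of regex -> template rules.
# Each rule captures part of the user's utterance and reflects it back.
# These specific rules are illustrative, not from Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.I), "What makes you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the first matching rule's template, filled with the captured text."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # canned fallback when no rule fires
```

The point of the comparison upthread is scale: three rules produce transparently mechanical output, while billions of learned "substitutions" can feel like understanding.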
My impression too. They keep saying "just wait a year or two and...". They may well be right, but for now it's just a slightly better way to find information online.