Define intelligence. Say you took a human brain and kept it alive in a mad scientist's pickle jar. Let's assume the brain's wired up so it can hear and speak, it's got an idiot savant's memory, and someone has just read it the internet.
What do you think the most impressive things are that the brain could do, that GPT-4 couldn't?
Statistics is the collection and analysis of numerical data. But what is that numerical data quantifying? It has to be quantifying some second thing. If everything were "just statistics," there would be nothing left to take statistics of. That would be an infinite regress - a "turtles all the way down" argument.
https://en.wikipedia.org/wiki/Turtles_all_the_way_down
> What do you think the most impressive things are that the brain could do, that GPT-4 couldn't?
This depends on what we consider "impressive." The human brain can control an entire human body, for one. Even if we take away the body, GPT-4 fails all sorts of basic arithmetic and reasoning problems. Additionally, the human brain is organic, analog, holographic, and reacts to external electromagnetic stimulation.
You haven't defined intelligence, nor answered the question.
Controlling a body is not relevant to a brain in a jar, nor does it detract from the intelligence of someone like Stephen Hawking.
We're talking about intelligence - a capability - so applying labels like organic, analog, etc. is irrelevant. The question is what it can DO, not how it's built.
> Controlling a body is not relevant to a brain in a jar, nor does it detract from the intelligence of someone like Stephen Hawking.
I never said Stephen Hawking wasn't intelligent; I was just pointing out obvious differences between the human brain and GPT-4. GPT-4 is unable to solve basic arithmetic problems without employing a traditional calculator. It also can't accurately summarize long books, and it can't program a GPT-4 of its own.
> You haven't defined intelligence
> We're talking about intelligence
If you want to talk about intelligence so badly, perhaps you should at least attempt to define it for yourself first? I don't like using poorly defined words, which is why the word "intelligence" didn't appear anywhere in my previous comment.
> The question is what can it DO
Is that your measure of intelligence? If something does less, is it then less intelligent? Because, by that standard, the body is integral to intelligence.
GPT-4 doesn't do great at math, but it can ace the bar exam and do plenty of far more complex things. However, individual skills or knowledge aren't the same as intelligence. You can be intelligent and still not know something until told/shown, or not know how to do something until you've tried and practiced.
The whole discussion is about intelligence. I was replying to OP. MY definition of intelligence is the degree of ability to apply prior experience to correctly predict future outcomes. However, my definition isn't particularly relevant here, since you seem to want to side with OP and say that GPT-4 isn't intelligent - so it's YOUR definition of intelligence that would be needed to support your position.
BTW, I'm not sure why you think GPT-4 couldn't code a Transformer (i.e. a GPT-3/4 type model). Yesterday I was talking to GPT-4 about the differences in scaling strategy between GPT-2 and GPT-3, and the detailed operation of decoder-only Transformers, and it appeared very capable - and it can certainly code. Again, a specific skill like being able to code isn't necessary to be called intelligent, but if that's your litmus test then GPT-4 passes.
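For a sense of scale, this is the sort of thing it will write on request - a bare-bones causal self-attention layer. The sketch below is my own (numpy, random untrained weights, purely illustrative), not actual GPT-4 output:

    import numpy as np

    def causal_self_attention(x, Wq, Wk, Wv):
        # x: (seq_len, d_model); the causal mask means each token sees only the past
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(k.shape[-1])
        mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
        scores = np.where(mask, -1e9, scores)        # hide future positions
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)           # softmax over the visible past
        return w @ v

    rng = np.random.default_rng(0)
    d = 8
    x = rng.normal(size=(4, d))                      # 4 tokens, 8-dim embeddings
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    print(causal_self_attention(x, Wq, Wk, Wv).shape)  # -> (4, 8)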
A traditional static program could ace the bar exam - or even a well-prepared stack of flash cards could. We wouldn't say the flash cards are exhibiting intelligence, though; only their creators.
> The whole discussion is about intelligence. I was replying to OP.
OP was replying to Scientific American's article making numerous unfounded claims of "intelligence." So the party you should be asking for a definition is Scientific American.
> MY definition of intelligence is degree of ability to apply prior experience to correctly predict future outcomes
And what constitutes a "prediction," exactly? If someone fails to catch a baseball, would you say it's still possible they correctly predicted how to catch it? If not, that would mean control over a body is integral to intelligence. If so, then any object could be claimed to be intelligent but lacking in bodily function. And if something predicts the future without applying prior experience, does that make it more intelligent or less intelligent?
> you seem to want to side with OP and say that GPT-4 isn't intelligent - so it's YOUR definition of intelligence that would be needed to support your position.
You're trying to put words in my mouth, but I will play along. I'll say intelligence is the ability to autonomously create increasingly complete and consistent axiomatic systems. Since GPT-4 is digital, operating according to decisions (axioms) determined solely by external programmers and external data with little to no concern for consistency, I would say it's not intelligent. However, if a similar schema were applied to some sort of analog computer that had the ability to fluctuate or disobey its instructions then there would be more room for debate.
The way GPT-4 works is by having built a world model of the generative processes that produced the data it was trained on. The more data you train it on (and the larger and therefore more capable the model is), the better it performs - i.e. the more complete and consistent this world model has evidently become. I'm not sure where you are seeing daylight between this and your own definition of intelligence.
FWIW GPT-4, being a neural net, is more analog than not. It's driven by floating point values, not 1's and 0's. The values are imperfectly calculated (limited accuracy) as computer math always is. There is also a large element of pure randomness to the output of any of these LLMs. They don't get to control exactly what words they generate ... the model generates probabilities over tens of thousands of possible output words, and a random number generator is used to select one of the higher-rated words to output. This semi-random word is then fed back into the model, for it to "generate" the next word ... it is continuously having to adapt to this randomness forced upon it.
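As a toy sketch of that loop (my own illustration - the five-word vocabulary and "fake_model" function are stand-ins, not anything from GPT-4):

    import numpy as np

    rng = np.random.default_rng()
    vocab = ["the", "cat", "sat", "on", "mat"]

    def fake_model(context):
        # stand-in for a real forward pass: one logit per vocabulary word
        return rng.normal(size=len(vocab))

    def sample_next(logits, temperature=0.8):
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return rng.choice(len(vocab), p=probs)   # <- the randomness enters here

    context = ["the"]
    for _ in range(5):
        word = vocab[sample_next(fake_model(context))]
        context.append(word)                     # fed back in for the next step
    print(" ".join(context))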
> The more data you train it on (and the larger and therefore more capable the model is), the better it performs - i.e. the more complete and consistent this world model has evidently become.
Increasing training data doesn't increase consistency. Each data point acts as a potential new axiom, and each axiom decreases consistency. GPT-4 is trained to satisfy humans, and humans are wildly inconsistent. Even if humans were perfectly consistent, attempting to satisfy multiple different humans simultaneously results in inconsistency. Additionally, even if GPT-4 were perfectly complete and consistent it still wouldn't have reached this state autonomously. So the difference between GPT-4 and intelligence, by my definition, is night and day.
> FWIW GPT-4, being a neural net, is more analog than not. It's driven by floating point values, not 1's and 0's.
Floating point values are digital 1's and 0's underneath. Adding more digits is never going to make something analog.
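To make that concrete (a quick Python illustration of my own):

    import struct

    x = 0.1
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    print(f"{bits:064b}")   # the 64 binary digits that actually store 0.1
    print(x.hex())          # 0x1.999999999999ap-4 -- not even exactly 0.1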
> The values are imperfectly calculated (limited accuracy) as computer math always is.
Agreed.
> There is also a large element of pure randomness to the output of any of these LLMs.
Strongly disagree. There isn't a single element of true randomness at any stage. We know the exact architecture of the neural net, we know the exact data it was trained on, and we know the exact beam selection algorithms used to synthesize outputs. Every single step can be simulated, traced, and recreated to achieve the exact same results. The number of steps involved might overwhelm us, but that doesn't make it random.
> They don't get to control exactly what words they generate
We do get to control it; we just lose track of the inputs and then pretend it was all out of our control. But of course every single step was willed and controlled by us. We call it "random" for personal convenience, not because it's actually true.
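A trivial demonstration of the point (my own sketch, obviously not GPT-4's actual sampler): seed the generator the same way and every "random" choice repeats exactly.

    import random

    def sample_words(seed, n=5):
        rng = random.Random(seed)   # fixed seed = fixed "randomness"
        return [rng.choice(["alpha", "beta", "gamma", "delta"]) for _ in range(n)]

    print(sample_words(42))
    print(sample_words(42))         # identical output: nothing was truly random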
These models don't output sequences - which is where you'd use beam search - they output a single word at a time. The output is a set of probabilities (from a softmax), which is then sampled at a given sampling "temperature" (degree of randomness).
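For instance, here's roughly what temperature does to the distribution (a toy illustration of my own, not anyone's production code):

    import numpy as np

    def softmax(logits, temperature):
        e = np.exp(logits / temperature)
        return e / e.sum()

    logits = np.array([2.0, 1.0, 0.1])
    print(softmax(logits, 1.0))    # ~[0.66 0.24 0.10]  moderate spread
    print(softmax(logits, 0.1))    # ~[1.00 0.00 0.00]  near-deterministic
    print(softmax(logits, 10.0))   # ~[0.37 0.33 0.30]  near-uniform, very random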
There's no point discussing it when you obviously don't have a clue how these models work, won't listen when you're told, and just prefer to make stuff up.
Beam search is just one example; the same applies to top-k/top-p sampling and greedy search. Focusing on one approach suggests you've missed the actual point: if you know how a given output is synthesized, then it's not autonomous. You're trying to nitpick as an excuse to avoid a substantive response. If you want me to agree with you, you'll need to offer a counterpoint. Saying things like "It's driven by floating point values, not 1's and 0's" and conflating pseudorandomness with actual randomness does not inspire confidence.
I was curious what your definition of intelligence was, such that you thought GPT-4 doesn't exhibit it. It's your opinion - it doesn't have to agree with mine.
You seem to place an importance on whether the models are entirely predictable or not, which is why I pointed out that the output is randomly sampled.
You could choose to use a truly random hardware random number generator, and it would make zero difference to the system, empirically or philosophically.
I don't believe such a digital generator exists, but if it did, it would make the massive difference of being unpredictable from the perspective of all of its creators (because there would be irrational numbers impossible to fully simulate ahead of time). That would raise the question of who made its decisions, which would spark debate about its autonomy, which would then qualify it as intelligent per the definition you requested earlier.
You are falling for the most basic marketing tricks in the book. Just because something calls itself "trueRNG" does not mean it is; there's no possibility of those devices outputting an irrational number.
Experience wires our brains by forming/strengthening/weakening synapses based on correlated activity, starting with perception.
In other words, our brains are tuned according to the statistics of experience, just as a Transformer's weights are tuned by the statistics of the training set.
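For the intuition, here's a toy Hebbian update (my own sketch, not a real brain model): a single "synapse" strengthened by co-activity ends up encoding a statistic of its input stream.

    import numpy as np

    rng = np.random.default_rng(0)
    pre = rng.random(1000) < 0.5                       # pre-synaptic spikes
    post = rng.random(1000) < np.where(pre, 0.8, 0.1)  # correlated post spikes

    w, lr = 0.0, 0.01
    for a, b in zip(pre, post):
        w += lr * (float(a and b) - w)   # strengthen on co-activity, decay otherwise
    print(round(w, 2))                   # converges toward ~0.4 = P(both fire)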