Hacker News | abarker's comments

This is a paywall-free link the NYT Opinion account posted to Reddit: https://www.nytimes.com/interactive/2025/03/15/opinion/forei...


The technology could also be used by oppressive governments to harass dissenters (and anyone else they might target). Essentially modern, high-tech COINTELPRO.


Also he had been under FBI surveillance, which he was apparently aware of.


The Australian Human Rights Commission discussed many of these issues in its March report, "Protecting Cognition: Background Paper on Neurotechnology":

https://humanrights.gov.au/our-work/technology-and-human-rig...


It came to light because some financial records were sent to the CIA's Retired Records Center for some reason and escaped destruction. See page 5 of [1].

[1] https://www.intelligence.senate.gov/sites/default/files/hear...


This reminds me of the Buddha's comparison of the Dhamma to a raft, which one does not carry around on his or her back after crossing over on it.

https://www.accesstoinsight.org/tipitaka/mn/mn.022.than.html


It's a common motif in spiritual systems. Compare with the archangel Gabriel ('the intellect') accompanying Muhammad on his mystical ascension (Miraj) and stopping at the boundary of the 'central tree', claiming he could go no further because his wings would burn. (In a continuing series of cosmic winks, please note that this ascension is called 'The Ladder' in Arabic, and we also have a tree ...)

https://en.wikipedia.org/wiki/Sidrat_al-Muntaha


This is worth reading. Nagarjuna had similar ideas, going beyond the metaphor of the raft.

[1] Nāgārjuna, Nietzsche, and Rorty’s Strange Looping Trick

Philosophers have lots of tools and tricks up their sleeves. They, of course, can use formal argumentation, they can employ all sorts of thought experiments to elicit various intuitions, they can lay out examples, dilemmas, dialectics, and do a whole host of other things. But I want to talk about one particular trick that only a select few philosophers have employed. This trick involves wrapping everything up in a philosophical system only to have that system knock itself down by its own internal means, and doing it all in order to produce some sort of anti-philosophical result. I’ve come to call this the “looping” trick, and it’s one of the most philosophically curious things that I’ve ever stumbled upon.

[1]: https://absoluteirony.wordpress.com/2014/09/17/nagarjuna-nie...


There's a little more information in this Reddit thread [1]. In it someone references an earlier work by Jason Zimba [2].

[1] https://www.reddit.com/r/math/comments/11zptjo/an_impossible...

[2] https://forumgeom.fau.edu/FG2009volume9/FG200925.pdf


Conversely, these models open up philosophical questions of "exactly what a human is" beyond language abilities. How much of what we think, do, and perceive comes from the use of language?


I think most intelligence is in the language. We're just carriers, but it doesn't come from us and doesn't end with us. We may be lucky to add one or two original ideas on top. What would a human be without language?

Language models feed from the same source. They have as much claim to intelligence; it's the same intelligence. What makes language models inferior today is their lack of access to feedback signals. They are not embodied, embedded, enacted, or extended in the environment (the 4 E's). They don't even have a code execution engine to iterate on bugs. But they could have.

And when a model does have access to massive experimentation and search, and can learn from the outcomes, like AlphaGo, then it can beat us at our own game. Training purely in self-play mode, learning from verified outcomes, was enough to surpass two thousand years of Go history, all of our players put together.

I think future code generation models will surpass human level based on massive problem-solving experience, most of which will be generated by the model's previous version. A human could not experience as much in a lifetime.

This is the second source of intelligence: experience. For language models it only costs money to generate; it's not a matter of getting more human data. So the path is wide open now. Who has the money to crank out millions of questions, problems and tasks + their solutions?
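
To make that concrete, here is a toy sketch of the generate-and-verify loop I have in mind (all names and details are hypothetical, just to show the shape of it, not any real training setup):

    import random

    def propose_task(previous_model_version):
        # The previous model version invents a task and a candidate solution.
        # Here both are hard-coded stand-ins for whatever it would generate,
        # and the candidate is deliberately wrong some of the time.
        data = random.sample(range(1000), 20)
        candidate = sorted(data) if random.random() > 0.3 else list(data)
        return {"input": data, "candidate": candidate}

    def verify(example):
        # An automatic checker: unit tests, a compiler, a game result, etc.
        return example["candidate"] == sorted(example["input"])

    def generate_experience(previous_model_version, n_attempts):
        # Keep only the attempts that pass verification; these become the
        # training data for the next model version. No extra human data
        # needed, only compute.
        attempts = (propose_task(previous_model_version) for _ in range(n_attempts))
        return [ex for ex in attempts if verify(ex)]

    dataset = generate_experience(previous_model_version=None, n_attempts=1000)
    print(len(dataset), "verified examples to train the next version on")

The point is that the verifier, not a human, is the source of signal, so the amount of experience you can produce is limited only by compute.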


> I think most intelligence is in the language. We're just carriers

This is such a profound idea. I’ve been wondering about that for a while. Is there anywhere to read up on it?


Isn't that just a rephrasing of the Sapir-Whorf hypothesis? If so, it's old and thoroughly debunked. Language features don't seem to influence the way we think, which is another way of saying that intelligence and language are different things. If you want to read about it, you can trace the history of the idea through the 20th century, from its proposal by linguists in the 1930s up to the time it became discredited; there are many research papers and even books on the topic.

One of the really troublesome problems with Sapir-Whorf and its derivatives is that they led directly to some very nasty totalitarian behaviors. In "1984", a core policy of the Big Brother government is Newspeak, in which changes to the language are (believed to be) used to control the thoughts of the population and establish eternal power for the Party. This wasn't merely a quirky bit of sci-fi; it was directly inspired by the actual beliefs of the hard left. The extent to which Newspeak was an accurate portrayal of life under the Nazis and Communists is explored in "Totalitarian Language: Orwell's Newspeak and Its Nazi and Communist Antecedents".

https://searchworks.stanford.edu/view/2016479

Today it's known that Sapir-Whorf isn't supported by the evidence, but there's still a strong desire on the political left to manipulate thought through language. Stanford's recent "Elimination of Harmful Language" initiative is a contemporary example of this intuition in practice. It doesn't work but it sounds so much easier than engaging in debate that people can't let it go.

tl;dr: to the extent this has been studied already, intelligence is not in the language.


I’m thinking of something different but you raise some good points regardless.


They don’t “open” shit; the linguistic turn came and largely went long before they existed.


I wonder how effective it would be at finding exploitable security holes in code.


The relation is virtually always bidirectional, and often forms complex, dynamic feedback loops. Simple causal analysis can fail in such situations.

