Sounds like you didn't lose the ability, you lost motivation. Why learn Rust, you say, if an LLM can crank out a Rust app for me, and it will be good enough?
LLMs may have removed the critical need for a SW engineer to know details, like the syntax of Rust or the intricacies of its borrow-checking semantics. But LLMs, I maintain, didn't remove the need for an engineer to learn _concepts_ and keep a large, robust library of concepts in their head. Diverse, orthogonal concepts like data structures, security concerns, callbacks, recursion, event-driven architecture, big O, cloud computing patterns, deadlocks, memory leaks, etc. As long as you are proficient with your concepts, you will easily catch up with the relevant details in any given situation. Once you've seen recursion, for example, you will have no trouble recognizing it in any language.
That's the beauty of LLMs: you don't _have_ to be good at technical details any more. But you still have to be very good with concepts, not just to use LLMs properly, but also to _stay in control_ of their work. LLM slop is dangerous not because of incorrect details like bad syntax. It is dangerous because it misplaces concepts: it may use a list where you need a hash map and degrade performance, it may forget a security constraint and cause a data leak, or it may be specific where it needs to be general. An engineer needs to know and check the concepts if they want to remain in control. (And you absolutely do want that.)
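To make the list-vs-hash-map point concrete, here's a toy Python sketch (sizes and variable names are made up for illustration). Both containers give the identical, "correct" answer; only the scaling differs, which is exactly the kind of concept-level mistake that doesn't show up as a syntax error:

```python
# Toy illustration: the same membership check, O(n) on a list
# vs O(1) on a set. Same correct result, very different cost.
import time

n = 100_000
as_list = list(range(n))
as_set = set(as_list)
needle = n - 1  # worst case for the list: the element is at the end

t0 = time.perf_counter()
for _ in range(50):
    assert needle in as_list   # linear scan every time
t_list = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(50):
    assert needle in as_set    # constant-time hash lookup
t_set = time.perf_counter() - t0

assert t_set < t_list  # identical results, very different scaling
```

Run it with a bigger `n` and the gap only widens; the point is that nothing in the code "looks wrong" until you check the concept behind it.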
But it is impossible, or very impractical, to just learn an abstract concept out of thin air. The normal way to learn a concept is to see its concrete instantiation somewhere, in all its detailed glory, and then retain its abstract version in your head.
So, the only way to stay relevant and stay in control is to have a robust concept library in your mind. And the only way to get that is to immerse yourself in many real technical situations, the details of which you must crack first but are free to forget later. That is learning, and that is still important today in the age of LLMs.
I feel this OpenClaw stuff is a bit like the "crypto" of agentic AI. Promise much, move fast and break things, be shiny and trendy, have a multitude of names, be moderately useful while things go right (and be very useful to malicious actors), be catastrophic and leave no recourse when things inevitably go wrong.
Ultimately it’s a solution in search of a problem. Nobody really wants to over-automate their workflows and life if the tradeoff is even a modest decline in accuracy.
Objects with sharp edges generate higher-frequency harmonics when agitated, because smaller features resonate at higher frequencies (the way shorter strings ring at a higher pitch). Round objects resonate at low frequencies only. The "kiki" sound has more high-frequency content than the "bouba" sound, and it's no mystery why the brain associates one with the other.
That's one theory. Another one I can think of is that sharp edges are scary, and most distress calls are high pitched.
Also, the point about high frequencies and sharp edges leads to a contradiction: babies are rounder than adults yet produce higher-pitched sounds, and this is almost universal across species.
There are other tentative explanations, such as how the vocal tract acts when producing these sounds, with "bouba" sounds being the result of smoother movement more reminiscent of a round shape.
"kiki" is not just higher pitched, it is also "shaped" differently if you look at the sound envelope, with, as expected, sharper transitions.
So to me, the mystery is still there. It is the kind of thing that sounds obvious, in the same way that kiki sounds obviously sharper than bouba, but is not.
> Also, the point about high frequencies and sharp edges leads to a contradiction: babies are rounder than adults yet produce higher-pitched sounds, and this is almost universal across species.
It's more about harmonic content than the fundamental pitch. A thing with sharp transitions produces more harmonics than a thing with rounded transitions, regardless of the fundamental. Compare the harmonic content of a pure sine wave (just the fundamental) with that of a square wave, which has an infinite series of higher harmonics.
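You can see the sine-vs-square claim numerically with a naive DFT (a toy Python sketch, not a real analysis pipeline; the signal sizes are arbitrary):

```python
# Toy demo: a sine wave has energy only at its fundamental;
# a square wave at the same fundamental also has odd harmonics.
import cmath
import math

def dft_mag(signal):
    """Naive DFT magnitude spectrum, normalized by length."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

n = 256
cycles = 8  # fundamental frequency, in bins
sine = [math.sin(2 * math.pi * cycles * t / n) for t in range(n)]
# A symmetric square wave with the same period:
square = [1.0 if (t * 2 * cycles // n) % 2 == 0 else -1.0 for t in range(n)]

sine_spec = dft_mag(sine)
square_spec = dft_mag(square)

# The sine has essentially nothing at the 3rd harmonic;
# the square wave's spectrum is f, 3f, 5f, ... (amplitudes 1, 1/3, 1/5, ...).
assert sine_spec[3 * cycles] < 0.01
assert square_spec[3 * cycles] > 0.05
```

The sharp transitions of the square wave are precisely what put energy into those higher bins.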
Babies are also smaller, which means higher fundamental pitch.
> "kiki" is not just higher pitched, it is also "shaped" differently if you look at the sound envelope, with, as expected, sharper transitions.
Exactly!
EDIT: I think this is interesting: it applies to images as well, not just sound. You can "low-pass filter" a photograph and it'll reduce some of the detail, smoothing out transitions (typically used for noise reduction). Detail is high-frequency information (or high-frequency noise, depending on whether you want it or not).
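As a toy illustration of that "low-pass filter a photograph" idea, here's a 1-D scanline version in Python (a simple box blur standing in for a real image filter; the values are made up):

```python
# Toy demo: a scanline with a hard black->white edge, box-blurred.
# The low-pass filter smooths the transition -- detail is high frequency.
row = [0.0] * 8 + [1.0] * 8  # sharp edge in the middle

def box_blur(signal, radius=2):
    """Average each sample with its neighbors (a crude low-pass filter)."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

blurred = box_blur(row)

# The original jumps from 0 to 1 in a single step; the blurred edge ramps.
step_orig = max(abs(row[i + 1] - row[i]) for i in range(len(row) - 1))
step_blur = max(abs(blurred[i + 1] - blurred[i]) for i in range(len(blurred) - 1))
assert step_orig == 1.0
assert step_blur < 0.5
```

A real 2-D Gaussian blur works the same way, just with a 2-D window; "sharpness" lives entirely in the high-frequency part of the image.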
Hens make it occasionally when laying eggs, but it's also the rooster alarm sound. The "cock-a-doodle-doo"/crowing sound is more the all-clear/I'm-a-rooster-here-I-am/flock-assemble cry.
When there's a threat, the rooster switches to a loud, BAWK BAWK BAWK alarm.
Hollow things are common, and of interest to many animals. If I thump a log and it makes a noise like it has a hollow space (low tones), then it may contain an animal nest or a beehive & honey, or it may be something I could use as a box or basket or shelter.
Maybe animal sounds count? Warning sounds tend to be loud, sharp, and high-pitched; and when ignored, they might end with some very material sharp things in your skin. I can't recall any animal with a soft warning sound.
> The "kiki" sound has more high frequency content than the "bouba" sound
And where did you get that from?
In non-tonal languages, pitch conveys almost no information and people speak at very different pitches (a man saying "kiki" will, most of the time, say it at lower frequencies than a woman saying "bouba"), so I find your claim very dubious.
> and it's no mystery why the brain associates one with the other.
Specialists of the field find that mysterious but some smartass on HN disagrees.
> > The "kiki" sound has more high frequency content than the "bouba" sound
> And where did you get that from? In non-tonal languages, pitch conveys almost no information and people speak at very different pitches (a man saying "kiki" will, most of the time, say it at lower frequencies than a woman saying "bouba"), so I find your claim very dubious.
You misunderstand the post. It has nothing to do with the voice of the speaker.
Long, drawn-out sounds have lower-frequency components than short-lasting sounds. A pin drop is REALLY high-pitched; a moan has at least some low-pitched components (though it may still have high ones too; that case is more often called "keening" than a moan). It's not about intonation; it's a mathematical consequence of the relationship between frequency and the duration of events in the time domain, typically measured with Fourier transforms.
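The pin-drop-vs-moan point can be sketched numerically: a one-sample "click" spreads its energy evenly across every frequency bin, while a sustained tone concentrates it in one low bin (a toy Python DFT with made-up sizes):

```python
# Toy demo of the time/frequency tradeoff: short events have broad
# spectra; long events have narrow, concentrated spectra.
import cmath
import math

def dft_mag(signal):
    """Naive (unnormalized) DFT magnitude spectrum."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

n = 128
click = [0.0] * n
click[0] = 1.0  # a single-sample "pin drop"
tone = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]  # long, low tone

click_spec = dft_mag(click)
tone_spec = dft_mag(tone)

# The click spreads its energy evenly across ALL frequency bins...
assert max(click_spec) - min(click_spec) < 1e-9
# ...while the sustained tone concentrates it in a single low bin.
assert tone_spec[4] > 10 * max(tone_spec[:4] + tone_spec[5:])
```

This is the uncertainty-principle flavor of Fourier analysis: you can't localize a sound sharply in time without smearing it across frequencies.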
Some of the research, including this paper, is trying to get at the question of whether a species' sensitivity to the bouba-kiki effect might be at the root of language or not. Since it seems accepted that chickens do not have language in any meaningful sense of that term, finding that they still show this effect decouples it from "the origins of language".
The bouba-kiki effect has previously been shown not to exist in some other primates. Given the general sense that they are closer to some form of language than baby chickens, its presence in the latter and absence in the former would suggest that it is not necessary for language.
All this research could be deeply flawed, however.
It's a hypothesis. How would you prove or disprove it? (And I would say it's not, a priori, utterly obvious that the brain would relate spatial and temporal frequencies like this.)
Do you remember how these things were called social NETWORKS, as in something you navigate and explore? Then they gradually became social MEDIA, as in something you consume...
Your website landing page is great. No stock photo hipsters drinking coffee, no corporate fluff amid whitespace wasteland. Just straight to the point. Rare sight today.