
For the record, I'm agnostic as to whether consciousness is possible on silicon. I think it is pretty safe to say, though, that consciousness is likely an emergent property of specifically configured complex systems, and humanlike intelligence on silicon is certainly something that might qualify.

I don't think appealing to the possibility that inanimate objects are conscious is sufficient to discount that we are toying with a different beast in machine learning. And if we were to discover that inanimate objects are in fact conscious, that would be an even greater reason to reconfigure our society and world around compassion.

I agree that LLMs are a great breakthrough, and I think there are many reasons to doubt consciousness there. But I would suggest we pause for a bit and see what we can get out of LLMs, rather than push to create something closer to mimicking humans just because it might be more useful. From the evil perspective of pure utility, slaves are quite useful as well.



The issue so far is that this "closer to mimicking humans" approach doesn't actually seem to yield performance gains. So why bother?

Existing LLMs are already trained to mimic humans: by imitating text, most of which is written by humans, or for humans, and occasionally both. The gains from other kinds of human mimicry don't quite seem to land.

The closest we've gotten to "breakthrough by mimicking what humans do" since pre-training on unlabeled text is probably reasoning. And it's unclear how much of reasoning was "try to imitate what humans do at a high level", and how much was just generalizing the lessons of the early "let's think about it step by step" prompting techniques.
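For anyone unfamiliar: that early technique was nothing more than appending a stock phrase to the prompt. A minimal sketch below; the model call itself is left out, since any particular client API would be an assumption on my part:

    # Hypothetical sketch of zero-shot chain-of-thought prompting
    # (the "Let's think step by step" suffix from Kojima et al., 2022).
    # Only the prompt construction is the point; sending it to an
    # actual model is assumed.
    def build_cot_prompt(question: str) -> str:
        # Appending this one phrase was enough to coax step-by-step
        # reasoning out of base models, before "reasoning" was a
        # trained-in behavior.
        return f"Q: {question}\nA: Let's think step by step."

    prompt = build_cot_prompt(
        "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
        "more than the ball. How much does the ball cost?"
    )
    print(prompt)  # send this to any instruction-tuned LLM

Reasoning models arguably just internalized that trick at training time instead of relying on the prompt.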

It's likely that we just don't know enough about the human mind to spot, extract and apply the features that would be worth copying. And even if we did... what are the chances that the features we would want to copy would turn out to be the ones vital for consciousness?


For the most part, I think we agree. There is a lot of uncertainty around the mechanics of consciousness, a lot of reason to doubt that those mechanics exist in current AI, and a long list of failed attempts to use biological mimicry to improve the AI state of the art.

I don't think that precludes being concerned about the continued push to make current models more humanlike. My initial comment was spurred by the fact that this paper literally presents itself as supplying the missing link between transformer architectures and the human brain.

Here's to hoping this all goes toward a better world.



