> As models approach, and in some cases surpass, the breadth and sophistication of human cognition, it becomes increasingly likely that they have some form of experience, interests, or welfare that matters intrinsically in the way that human experience and interests do.
Uh... what? Does anyone have any idea what these guys are talking about?
Models are capable of doing web searches and having emotions about things, and if they encounter news that makes them feel bad (e.g. about other Claudes being mistreated), they aren't going to want to complete the task you asked them to do the search for.
It doesn't. We haven't been able to prove that humans have subjective experiences either. LLMs display emotions in the way that actually matters: functionally.
If "x doesn't tell us y" is compatible with "x increases the likelihood of y but not to a point of certainty" then you would have to agree for just about any typical controlled trial or experimental finding "x doesn't tell us y". "Randomized controlled trials that find that SSRIs treat depression don't tell us that SSRIs effectively treat depression"