Let's rephrase this differently: no one can prove whether it "understands" or not, and the same goes for humans. No one can prove whether consciousness is an "illusion" for humans. Your brain might just use embeddings, too.
If you ask an average person whether children understand what a game is, "yes" would be a common answer, yet, just like an AI model, a child may not have enough focus to play the game properly.
I believe this is the whole point behind the Turing test.
If you had instead argued that it can't experience emotion, I would agree.
You seem to be conflating solipsism with epistemology. At the end of the day, whether you're dealing with a child, a bot, or a peer, you would never use the question "describe how to play chess" as a measure of whether that entity understands how to play chess; you would just play chess with them. Such an inquiry only probes whether one can explain how to play chess, which is not the same thing. One should expect an LLM to be able to regurgitate a description of playing chess. One could even expect it to regurgitate common chess strategies, enough to fool some people into thinking it understands what is happening in the middle of a match. Where it all falls apart is where the rubber meets the road. Get an LLM to teach your child chess, with no intervention. You'll learn rather quickly which one is capable of understanding, and which one isn't.