No human is in danger of forgetting Tiananmen Square unless they never knew about it in the first place. The details are strewn across the Internet and in libraries all over the world, and new generations of students and curious kids can easily learn about them.
Additionally, it has been shown that making models forget things lobotomizes them more broadly, so no model can do that and remain SOTA. They might be post-trained into pretending not to know, but the technology fundamentally cannot resist jailbreaking.
Do you have examples of knowledge that has actually been put at risk by this one AI model being added to the pile?