I don't get what I'm supposed to be angry about here. That Bill Gates wrote something when he was 20? 20-year-olds say stupid things all the time. Think about yourself when you were 20.
Given that, according to the ban proponents' own words, social media algorithms are addictive as hell and impossible to resist, what do they think will happen to those children after 16?
Do you really believe that they will magically be "immune"? Even adults are addicted to social media, and it didn't even exist back when they were teens.
This is what people miss when quoting those statistics about how self-driving cars are safer than humans: when a human driver causes an accident, it's because that particular person did something wrong. When a self-driving car mishandles a situation, that's a much bigger issue, because all the self-driving cars run the same software.
On the other hand, when a human driver causes an accident, one driver learns a lesson (maybe). When a self-driving car causes an accident, all cars get to learn from it.
Humans don't have bugs. They may have diseases or mental troubles, but we're pretty good at assessing those, thanks in part to their ability to communicate.
I'd argue that humans don't have bugs, but their mental models of things do, mostly as a consequence of the fact that a model's complexity increases with its accuracy.
> Remote attacks work by first exploiting an unpatched vulnerability in a browser, media player, or other app and using the administrative control gained to replace the legitimate logo image processed early in the boot process with an identical-looking one that exploits a parser flaw.
I don't get the point of this. Any vulnerability that requires local access can be exploited if you first get remote code execution through another vulnerability. Also, exploiting the browser or the media player doesn't give you admin privileges; you need a separate privilege-escalation exploit for that.
- it bypasses all forms of secure boot by getting code execution at the earliest stage of the boot chain of trust
- the disassemblies show that the BIOS vendors didn't even remotely try to make the parser secure. It's a joke. And if an image parser is that bad, I can't even imagine the quality of the USB or network stacks
But this makes the access persistent, allows the removal of all evidence of the initial penetration, and survives OS patching, vulnerability scanning, etc.
> But in truth, OpenAI was already a capitalist/commercial enterprise and has been for over a year and those concerned had already failed in keeping the beast contained. The cat is out of the bag, and trying to keep ChatGPT in the box wasn't going to do anything in preventing LLMs to continue to proliferate.
Exactly. The author writes as if OpenAI were the only company doing AI, when in fact many companies train AI models, many more will in the future, and other countries will as well. Once LLMs have been invented, you can't uninvent them.
Even if AI is a threat, it's impossible to predict today what that threat will look like in the future. The "AI safety" crowd watched too many Terminator movies.
We'll deal with AI the same way humans dealt with any other technology: gradually, as it gets better and better, discovering at each step what it's capable of.