adrians1's comments

I don't get what I should be angry about here. That Bill Gates wrote something when he was 20? 20-year-olds say stupid things all the time. Think about yourself when you were 20.


Given that, according to the ban proponents' own words, social media algorithms are addictive as hell and impossible to resist, what do they think will happen to those children after they turn 16?

Do you really believe that they will magically be "immune"? Even adults are addicted to social media, and it didn't even exist back when they were teens.


AI researchers are just as prone as everyone else to self-delusion.


This is what people don't appreciate when quoting those statistics about how self-driving cars are safer than humans: when a human driver causes an accident, it was because that particular person did something wrong. When a self-driving car handles a situation wrongly, that's a big issue, because all the self-driving cars run the same software.


On the other hand, when a human driver causes an accident, one driver learns a lesson (maybe). When a self-driving car causes an accident, all cars get to learn from it.


Yes, but when the bug was fixed, it was fixed everywhere.


That phrasing makes it easy to underestimate the astronomical dimensions of edge-case space, and what necessarily remains unexplored after such a fix.


How do we know for sure that it was fixed? What are the conditions that were fixed?


Since the comment thread is comparing humans, how do you know the human has fixed their bug?


Humans don't have bugs. They may have diseases or mental troubles, but we're pretty good at assessing those, thanks in part to their ability to communicate.


I'd argue that humans don't have bugs, but their mental models of things do, mostly as a consequence of the fact that a model's complexity increases with its accuracy.


> Remote attacks work by first exploiting an unpatched vulnerability in a browser, media player, or other app and using the administrative control gained to replace the legitimate logo image processed early in the boot process with an identical-looking one that exploits a parser flaw.

I don't get the point of this. Any vulnerability that requires local access can be exploited if you first get remote code execution through another vulnerability. Also, exploiting the browser or the media player doesn't give you admin privileges; you need another privilege escalation exploit for that.


What makes this vulnerability frightening is:

- the persistence is nearly perfect

- no antivirus can ever detect it

- it bypasses all forms of Secure Boot by getting code execution at the earliest stage of the boot chain of trust

- the disassemblies show that the BIOS vendors did not even remotely try to make the parser secure (see the sketch below). It is a joke, and if an image parser is that bad, I can't even imagine the quality of the USB or network stacks
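To make the "parser was never hardened" point concrete, here is a minimal C sketch of the classic mistake behind firmware image-parser bugs of this kind: trusting attacker-controlled size fields from the image header. Everything here is an assumption for illustration (the header layout, field names, and bounds are made up, and this is not the actual vendor code); it simply contrasts the vulnerable pattern with a bounds-checked variant.

  /*
   * Illustrative sketch only -- NOT actual UEFI vendor code.
   * Shows the classic parser mistake: trusting attacker-controlled
   * size fields in an image header.
   */
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Hypothetical header layout: width and height as 32-bit fields. */
  struct img_header {
      uint32_t width;
      uint32_t height;
  };

  /* Vulnerable pattern: width * height can wrap in 32-bit arithmetic,
   * so the allocation ends up far smaller than the data copied into it. */
  uint8_t *parse_logo_unsafe(const uint8_t *buf, size_t len) {
      if (len < sizeof(struct img_header)) return NULL;
      struct img_header h;
      memcpy(&h, buf, sizeof(h));
      uint8_t *pixels = malloc(h.width * h.height);      /* may wrap to a tiny value */
      if (!pixels) return NULL;
      memcpy(pixels, buf + sizeof(h), len - sizeof(h));  /* heap overflow */
      return pixels;
  }

  /* Safer pattern: validate sizes before allocating or copying. */
  uint8_t *parse_logo_safe(const uint8_t *buf, size_t len) {
      if (len < sizeof(struct img_header)) return NULL;
      struct img_header h;
      memcpy(&h, buf, sizeof(h));
      if (h.width == 0 || h.height == 0) return NULL;
      if (h.width > 4096 || h.height > 4096) return NULL;  /* assumed sane bounds */
      uint64_t need = (uint64_t)h.width * h.height;         /* no 32-bit wrap */
      if (need != (uint64_t)(len - sizeof(h))) return NULL; /* size must match payload */
      uint8_t *pixels = malloc((size_t)need);
      if (!pixels) return NULL;
      memcpy(pixels, buf + sizeof(h), (size_t)need);
      return pixels;
  }

  int main(void) {
      /* A header claiming a huge image but carrying almost no data:
       * the unsafe parser would under-allocate; the safe one rejects it. */
      uint8_t evil[sizeof(struct img_header) + 16] = {0};
      struct img_header h = { .width = 0x10000u, .height = 0x10000u };
      memcpy(evil, &h, sizeof(h));
      printf("safe parser result: %p\n", (void *)parse_logo_safe(evil, sizeof(evil)));
      return 0;
  }

The demo header's 0x10000 x 0x10000 product wraps to zero in 32-bit arithmetic, which is exactly the kind of input a hardened parser rejects up front and a naive one happily allocates for.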


But this makes the access persistent, allows the removal of all evidence of the initial penetration, and survives OS patching, vulnerability scanning, etc.


Agree, it's a serious vulnerability, but it's not exploitable remotely.


Not true, most AI doomers are actually worried about Skynet.


> But in truth, OpenAI was already a capitalist/commercial enterprise and has been for over a year and those concerned had already failed in keeping the beast contained. The cat is out of the bag, and trying to keep ChatGPT in the box wasn't going to do anything in preventing LLMs to continue to proliferate.

Exactly. The author writes as if OpenAI is the only company that does AI, when in fact many companies train AI models, many more will in the future, and other countries will as well. Once LLMs have been invented, you can't uninvent them.


> Most of the ai safety arguments revolve around “once it’s powerful enough to cause damage it’ll be too late to come up with a strategy”

I don't see any reason to accept this argument. The AI safety people should also prove their assertions, not expect us to take them at face value.


How do you prove a counterfactual?


Even if AI is a threat, it's impossible to predict today how the threat is going to look in the future. The "AI safety" crowd watched too many Terminator movies.

We'll deal with AI the same way humans dealt with any other technology: gradually, as it gets better and better, discovering at each step what it's capable of.


Right, just like the dinosaurs gradually dealt with a meteor igniting the atmosphere.


The technology was already developed with Microsoft money and the model was exclusively licensed to Microsoft.

