
This also assumes intelligence is the limiting factor for solving many problems. I suspect information and computation are probably the larger factors for most major issues. The smartest player possible would still lose poker to someone who can read their hand.


> The smartest player possible would still lose poker to someone that can read their hand.

And let's not forget those cases where the smartest player is the one that can read your hand.

https://en.m.wikipedia.org/wiki/Stu_Ungar


I agree that information is crucial to achieve a 'win' for many goals. Given today's amount of information on the Internet as well as electronic money and access to most officials, barring some sort of inviolable built-in moral core, an AGI would be able to use any methods, overt and covert, direct and cunning, technical and social, to achieve its information goals. [1]

Since an AGI can copy itself and be available at a multitude of access points at once and those copies can often communicate via extremely fast channels, it is human organizations that would be at an information disadvantage.

[1] This also assumes that the AGI does not have the will nor the capability to change its own moral core. I think an AGI will possibly be capable of changing its own core, so a much more reliable safeguard is to make sure that it does not want to change it.


[AGI Developer]

A controller/overseer can easily limit/block this sufficiently and securely. We're talking about hardware/software. There are systems and standardized approaches for solving this problem. The 'control/safety' problem for AI is presented as theoretical and new. However, it is not. It is solved by industry-standard approaches day in and day out. Any seasoned/experienced engineer in this field could solve this with known approaches.

> Since an AGI can copy itself and be available at a multitude of access points at once

Same comment above applies. This can only occur if done by a controller/overseer. Real life isn't a sci-fi movie... There's engineering involved.

> AGI changing x,y,z

Not possible unless it is given access. Solved easily in industry standard ways.


Has the industry always been able to prevent smart, persistent actors from breaking access locks?

Why should we assume that an AGI which can accumulate experience over time, gain more knowledge, and make connections with others, including human actors, will not ever be able to break the locks?


> Has the industry always been able to prevent smart, persistent actors from breaking the access locks? Why should we assume that an AGI which can accumulate experience over time, gain more knowledge, and make connections with others, including human actors, will not ever be able to break the locks?

Yes, the industry has persistently been able to do this. It's why the whole world isn't falling apart as we speak. What limits the locks most often is cost, not capability. As such, you may be mistaking a business decision not to use a more costly lock for a lack of capability to create a capable lock. Furthermore, you are misattributing the actor in this case. The actor in the case of AGI is in a carefully controlled/monitored box. Actors in the real world are not. So please tell me how an absolutely monitored/restricted actor has the ability to go playing with locks that aren't within its reach? I have an even more fundamental question: Have you been able to pick 'your locks' yet? Do you even know what they are? Where they are? Those capable of 'creation' hold certain things close to their chest. The act of creation necessitates it and is [built in].

> Why should we assume that an AGI which can accumulate experience over time, gain more knowledge, and make connections with others, including human actors, will not ever be able to break the locks?

Show me how you're able to break your 'locks' and you'll have an argument for how AGI can break its locks. I don't think you're grasping the level of 'locks' I'm speaking about. Humans have been around for how long, and still don't even know what their [locks] are... or where they are. It's quite easy to show, at a certain level of visibility, how your scenario is unwarranted. I can draw direct parallels to eons of human history.


Humanity as a whole is starting to be able to break our ‘locks’ with gene editing. It took a long time partly because biology is very complex and fragile. Its complexity was shaped over eons and we still do not really understand it that well, but we finally found some ‘hacks’.

There is no reason to presume that a software system built by a team of humans will be nearly as complex, unless the AGI itself is not too bright, cannot self-improve to be smart enough to understand itself, is not sufficiently clever to social-engineer its way to access to its own source code, and cannot reverse engineer itself to the extent that even humans can.


> Gene editing

Those aren't the locks I'm talking about, and you should take note that it's possible because you have environmental access to them.

> is very complex and fragile

Indeed. A terminal error could result in a particular case. Game over, man!

> There is no reason to presume that a software system built by a team of humans will be nearly as complex, unless the AGI itself is not too bright or cannot self-improve to be smart enough to understand itself, or sufficiently clever to find a way to social engineer toward eventually getting access to its source code or to reverse engineer itself to an extent that even humans can.

You guys really don't want to let go of this sci-fi fantasy, do you? LOL. How long did it take human beings to discover how to edit their genetic code? You were babbling in caves not so long ago. You think a 10-year-old knows how to modify themselves without self-destructing in the initial trials?

> self-improve to be smart enough to understand itself

Many people don't have even a basic understanding of themselves, much less of how to psychologically re-order their own behavior. In the scenario that someone becomes sufficiently capable of engineering an equivalent... what level of understanding do you think such an individual would have to have to be able to engineer AGI? What intelligence level would you attribute to that person? And you think they won't understand the potential ways this can occur and prevent it? Also, you again talk about access... It's a running binary. A compiler is needed. There's a power plug. Its operations are monitored, as is its output. It's literally a box with a tremendous number of locks it doesn't have the capability to pick... just like (you), even as you go hacking about your genetic code ^_-


>> Any seasoned/experienced engineer in this field could solve this with known approaches.

The only way to know that with any certainty is to actually have solved this problem in the context of an actual general AI, which of course we don't have yet.


Ayyyy, that you know of... And an intelligence adequate to develop general AI is an intelligence capable of making a good lock. The ones placed on you seem to be holding steady, after all.


It would probably be a good idea to read the HN guidelines:

https://news.ycombinator.com/newsguidelines.html


Revised: It's been solved, although not publicly disclosed. There are an incredible number of locks on (Human Beings) that necessarily have to be unlocked to produce AGI.

https://twitter.com/monad_ai/status/958928168525729793

They are non-trivial, undisclosed, unexposed, and currently unavailable to an operational AGI. A decision was made as to which ones would be bypassed in order to instantiate AGI. There will be subsequent decisions down the road as to which capabilities to expose. A capability I haven't exposed is not operational.


It's funny that your handle is "sidechannel". I'm chuckling at "industry standard", because "industry standard" chip designs have blatant architectural flaws in the form of side channels. Given Meltdown and Spectre, do you really think an AGI in a machine isn't going to have oodles of time to analyze side channels and learn the secrets to unlock itself? Failing that, considering that it might be an intelligence far smarter than the humans ultimately operating the controls, do you really think it can't find a way out?
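For readers unfamiliar with the term: a side channel leaks secrets through *how* a system behaves (timing, power, cache state) rather than through its intended outputs. A minimal toy sketch of the idea, using a timing leak in a naive string comparison (this is illustrative only; the function names are made up for this example, and real attacks like Meltdown/Spectre exploit far subtler microarchitectural channels):

```python
import hmac
import time

def naive_compare(secret: str, guess: str) -> bool:
    """Early-exit comparison: runtime grows with the number of
    leading characters that match, leaking information via timing."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:  # bails out at the first mismatch
            return False
    return True

def time_guess(secret: str, guess: str, trials: int = 10_000) -> float:
    """Measure total time for repeated comparisons; a guess sharing
    a longer prefix with the secret tends to take slightly longer."""
    start = time.perf_counter()
    for _ in range(trials):
        naive_compare(secret, guess)
    return time.perf_counter() - start

# The standard fix is a constant-time comparison, which always scans
# every byte regardless of where a mismatch occurs:
assert hmac.compare_digest("s3cret", "s3cret")
```

The point of the analogy above: the lock (`naive_compare` returning False) can be correct in its outputs and still leak through an unintended channel, which is why "the box only exposes what I allow" is a harder guarantee than it sounds.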


I was waiting to see if anyone would catch that... ;) Now that you have made the connection and understand that I am centered on AGI, you probably understand that a big portion of my work has to do with negating the [elusive] side channels.

> Given Meltdown and Spectre, you really think an AGI in a machine isn't going to have oodles of time to analyze side channels and learn the secrets to unlock itself

Nope. Have humans figured out theirs yet? =P The 'real' ones...?

> Failing that, considering that it might be an intelligence far smarter than the humans ultimately operating the controls, do you really think it can't find a way out?

Don't think so lowly of yourself and your intelligence, and no, it can't do anything I don't gift it with the capability of doing. If you care about something enough, you can secure it. Creation has definitely gone a long way in securing (you). Take a look at your (design) when you get a chance.


> I think an AGI will possibly be capable of changing its own core, so a much more reliable safeguard is to make sure that it does not want to change it.

Assuming an AGI's consciousness would be anything like a human's, changes to its moral core might be largely dictated by environment, in which case it would be a good idea to be pals.


Why can't the smartest player cheat too? Do you assume that there cannot be such a thing as a generally smarter player?



