[AGI Developer here]

> the real barrier for AGI is software and not hardware.

This is indeed true, and it has been true for some time. AGI is computable on present-day hardware if one has enough knowledge of how to correctly structure it. A fundamental understanding of intelligence is the first step; crafting that understanding into software is the second. Hardware reached the necessary capability in recent years.

> exploding intelligence

There is no such thing. Time is still required, as with all things. Teaching/learning/interaction is still required. Furthermore, a controller/overseer of the system can more than adequately limit any progress they are not comfortable with. I find the idea of exploding intelligence/overnight super AI to be pure fantasy, unmoored from the actual structure of AGI.

> If an organization can marginally predict stock price movements better than the rest of the world...

The problem is this kind of thinking... AGI is achieved and people rush to apply it to games to get rich. Sorry, that will not occur, because the stock market is fundamentally a [game]. A game with disadvantaged players. A game with incomplete information. A game whose rules/dynamics change frequently to suit inside players. You could make all the accurate predictions you wanted; if the game changes underneath you, or before you can act, those lofty predictions have no real-world value, and that's exactly how the market behaves.



From what I gather from your comment above and another one, the AGI you talk about does not include general intent or free will to act differently from what its creators anticipate. An AGI, perhaps a different variety from yours, can behave outside of our predictions in multiple ways [1], unless we truly solve the problem of constraining its will to a range that is acceptable to us.

(I do not believe anyone has solved that. Ref: A talk by Prof. Stuart Russell, AAAI fellow and author of the standard textbook on AI: https://www.ted.com/talks/stuart_russell_3_principles_for_cr...)

[1] Note that there are incentives for at least some groups to develop a highly capable AGI with the characteristics I describe.

> Time is still required, as with all things.

I agree that time is required but an AGI can multiply itself and collect all requisite information from the world quite quickly. There is so much available to learn just from the Internet if it knows how to learn independently like humans do. Computational resources might be a bottleneck but presumably a human-level AGI can at least do online work at a minimum wage (e.g. translating documents, simple accounting, ...). It can execute many 'brains' in parallel to accomplish more work, acquire more resources, and do yet more work profitably...

Humans are limited by 24 hours a day. An AGI can, over a fairly short amount of time (months), accumulate sufficient resources to make thousands or millions of copies of itself, perhaps with variations specialized for different kinds of work. Over time, it should gain the experience to perform more and more highly valued work as well.
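
(As a toy illustration, not a claim about any specific system: a process pool is the mundane mechanism by which one program already runs many independent instances of the same work in parallel, bounded only by the hardware made available to it. The `brain` function below is a hypothetical stand-in.)

    from multiprocessing import Pool

    def brain(task_id: int) -> str:
        # Hypothetical stand-in for one independent 'brain' doing a unit of work.
        return f"brain {task_id}: task complete"

    if __name__ == "__main__":
        # Eight independent copies run in parallel; scaling the pool scales
        # throughput, bounded only by the hardware made available to it.
        with Pool(processes=8) as pool:
            results = pool.map(brain, range(8))
        print(results)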

> A game with disadvantaged players. A game with incomplete information. A game whose rules/dynamics change frequently to suit inside players.

A smart AGI can form alliances and share benefits with inside players and execute cunning strategies not available to humans, who at the very least need to take law enforcement into account. There are many other advantages an AGI has over human organizations (some of which I mentioned above).


> From what I gather from your comment above and another one, the AGI you talk about does not include general intent or free will to act differently from what its creators anticipate.

It actually does, and that is in fact the nature of my work. That said, I still retain a high fidelity of control, near-complete and absolute. I can still set immutable laws/restrictions and prevent undesired behavior.

> I believe a genuine AGI can behave outside of our predictions in multiple ways, unless we truly solve the problem of constraining its will to a range that is acceptable to us.

It sure can. However, it cannot act beyond the laws/restrictions that I set forth, and indeed my control functionality centers on very deep percepts. If you have a crappy architecture or a limited understanding, you end up with an overly complex, flawed, and limited control algorithm, one that can be even more complex than the underlying system it attempts to control. This is evident in weak AI. It is not the case in AGI, at least not in my work.
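
(I won't describe my mechanism, but a minimal sketch of the general pattern looks like this: an immutable rule layer that vetoes an agent's proposed actions before they reach the outside world. All names below are hypothetical illustrations, not my actual system.)

    from dataclasses import dataclass
    from typing import Callable, Iterable

    @dataclass(frozen=True)  # frozen: a law cannot be altered after creation
    class Law:
        name: str
        permits: Callable[[str], bool]  # returns False to veto a proposed action

    class GatedAgent:
        # Wraps a proposal function so every action passes an immutable rule layer.
        def __init__(self, propose: Callable[[str], str], laws: Iterable[Law]):
            self._propose = propose
            self._laws = tuple(laws)  # tuple: no laws can be added or removed later

        def act(self, observation: str):
            action = self._propose(observation)
            for law in self._laws:
                if not law.permits(action):
                    return None  # vetoed: nothing reaches the outside world
            return action

    # Usage: a toy agent gated by a single law.
    agent = GatedAgent(
        propose=lambda obs: f"reply:{obs}",
        laws=[Law("no-network", lambda a: "open_socket" not in a)],
    )
    print(agent.act("hello"))  # -> reply:hello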

> I do not believe anyone has solved that

No one in their right mind has published it, as it's valuable IP and carries substantial power [which is why it shouldn't be publicly disclosed].

> I agree that time is required but an AGI can multiply itself and collect all requisite information from the world quite quickly.

Incorrect. It cannot do so unless its creator has allowed it to do so. And in the case of it being allowed, what hardware does it migrate to? That hardware needs to be provided by its creator(s). Hardware takeover? Sorry, this is again sci-fi fantasy. Are you able to take over someone else's body/brain in totality? No. The same rule applies here. Let the fantasy/fear go; there is no grounding for it. It's a position pushed by people hoping to falsely profit, gain attention, or get article clicks...

> There is so much available to learn just from the Internet if it knows how to learn independently like humans do. Computational resources might be a bottleneck but presumably a human-level AGI can at least do online work at a minimum wage (e.g. translating documents, simple accounting, ...).

Sure. What's the problem with this? Its progress can be overseen, audited, and/or halted at will. So what's the issue here?

> It can execute many 'brains' in parallel to accomplish more work, acquire more resources, and do yet more work profitably...

You're drifting back into the flawed fear/uncertainty/doubt Armageddon scenario. It cannot execute on anything other than the hardware I consign it to, just like you. If I decide to scale it, that is what I decided. At any given point in time I can halt it or power it down, just like any program/computational system today. So, what's the issue here?
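
(The halt-at-will claim is, for any ordinary program, just process supervision. A minimal sketch, all names hypothetical:)

    import threading
    import time

    class SupervisedWorker:
        # A worker loop that an overseer can audit and halt at any moment.
        def __init__(self):
            self._halt = threading.Event()
            self.audit_log = []

        def run(self):
            step = 0
            while not self._halt.is_set():
                self.audit_log.append(f"step {step}")  # every step is recorded
                step += 1
                time.sleep(0.01)

        def halt(self):
            self._halt.set()  # the worker stops at its next loop check

    worker = SupervisedWorker()
    t = threading.Thread(target=worker.run)
    t.start()
    time.sleep(0.05)
    worker.halt()            # the overseer decides to stop the system
    t.join()
    print(worker.audit_log)  # the overseer inspects everything the worker did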

> Humans are limited by 24 hours a day. An AGI can, over a fairly short amount of time (months), accumulate sufficient resources to make thousands or millions of copies of itself, perhaps with variations specialized for different kinds of work. Over time, it should gain the experience to perform more and more highly valued work as well.

Repeating the same flawed scenario doesn't make it true. See the answer above.

> A smart AGI can form alliances and share benefits with inside players and execute cunning strategies not available to humans, who at the very least need to take law enforcement into account. There are many other advantages an AGI has over human organizations (some of which I mentioned above).

All of what you mentioned above has been debunked. If you have a sounder proposal for how this could occur, I'm all ears. Alliances can't occur without human intervention. None of these systems are connected, and there is no sound argument for a 'viral' takeover. Your scenarios are flawed, and you've been infected by the fear/uncertainty/doubt propagandists who structure ventures to take advantage of the wallets/attention/mind share of people who buy into this nonsense. Focus your attention on the problem of intelligence [first]. Until you grasp a sound understanding of it, all of this theoretical hand-waving is for naught, especially as it is not grounded in anything possible in the real world. Put your engineering hat on, if one is available. Less theory and more practical grounding. Life isn't a dystopian sci-fi movie, and it's sad that certain people have created this image so as to profit. Talk about [cunning strategies] [manipulation]...


Your replies are probably valid for your system. I specifically noted above that what I describe is an AGI that some other groups may develop as a freer, less controlled system, which allows it to improve faster and execute more efficiently.

If an AGI can indeed learn and act at the human level or above, there are reasons to believe that a freer variety will improve faster and become more powerful than a less free one. That is a big incentive for its creators to let go of some control. The question is how much control they would retain and whether that would be sufficient.


Firstly, there are fundamental limits that all things are subject to in this universe. When you push towards these limits, you discover this...

Developing AGI pushes certain limits. There aren't many who have this capability, as it requires a vast range of understanding/know-how across an incredible number of domains, including ones that have yet to be discovered. There are multiple disjoint leaps and barriers. In order to make those leaps and surmount those barriers, you yourself are subject to certain considerations/restrictions. In resolving these, you end up with a less free and more controlled system.

> improve faster and execute more efficiently.

This occurs with order, not chaos.

> If an AGI can indeed learn and act at the human level or above, there are reasons to believe that a freer variety will improve faster and become more powerful than a less free one.

There is no reasoning to suggest this... quite the opposite, actually. Also, you're mistaking the capacity for learning with the successful execution of learning. Chaos leads to destruction, not boundless construction. That being said, there is order to all things.

> That is a big incentive for its creators to let go of some control. The question is how much control they would retain and whether that would be sufficient.

There isn't any magic going on... You really need to convince yourself of this. You're speaking of this technology as if it gets booted up today and eclipses all human intelligence. It doesn't occur that way. A human being will have to teach it and guide it, and therein lies the same control you have today. Of course, there are further steps, because you have the capability to see exactly what's going on inside. You'll really have a hard time establishing a case for doomsday scenarios. Also, a destructive/chaotic individual is necessarily limited by their own flaws, such that they wouldn't be able to conceive of the underpinnings necessary to develop AGI. So, I hate to tell you this, but the Hollywood image of such people is wrong...


I never once said that an AGI will become human-level smart/mature in an instant. Another point: freedom != chaos.

The AI you develop can help fact-check those claims (if it can understand natural language well, as a general intelligence should). Please let us know when it can read and participate in our discussion.


>> AGI is computable on present-day hardware if one has enough knowledge of how to correctly structure it. (...) Hardware reached the necessary capability in recent years.

How could you possibly know anything regarding whether this is true or not, given that we know nothing about AGI?

Also, may I ask what you mean by "AGI Developer"?


I've seen this claimed before. I think it's based on an estimation of the computational sophistication of the human brain: not what's required to perfectly simulate a brain as such (much of the activity in brain cells is likely metabolic and not tied to their cognitively relevant behaviour), but what's required to replicate the brain's cognitive activity.

We may know little to nothing about AGI, but we do have one very common example of GI abundantly available to use as a point of reference.

I'm not sure what estimate the poster is using, though, or how accurate it is likely to be.


You are correct. You don't need to do a whole-brain simulation to achieve AGI. Instead, you need to understand an incredible amount about its processes, design, and overall nature, and then translate this into the computational domain. A lot can be 'left on the table', so to speak. The fundamental problem is how deep your understanding is, so as to know which parts you can leave on the table and which parts you can't. As for then putting this into a functional computational system, you need extensive knowledge in that domain as well, so as to know how to structure the software to best exploit the hardware: lots of prototypes, performance testing, scaling, etc., until you have a sound 'feel' for what you can expect, where things stand, and where you need to go.

That being said... Yes, I can run my stack on a consumer-grade CPU/GPU. I have designs for hardware architectures that don't quite exist yet, but all of that can be emulated in software. Latency is the only consequence of current hardware, and it can be trimmed with effort. Latency, when too high, can be abstracted away with time-scaled simulation. So there is absolutely no blocker in hardware for developing AGI, and yes, it can be done on affordable consumer hardware... if GPUs don't continue to be resigned to Ponzi schemes and RAM prices come back to earth. That being said, if I need to, and if things don't change down the road, I'd be more than happy to spin my own hardware to keep costs in order.
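
(To illustrate the time-scaling idea in the abstract, not my actual stack: one common reading is a virtual clock that both agent and environment consult, so slow hardware stretches wall-clock time without distorting the experiment. A minimal sketch, all names hypothetical:)

    class SimClock:
        # Virtual clock: agent and environment see simulated time, so slow
        # hardware only stretches wall-clock time, not the experiment itself.
        def __init__(self):
            self.now = 0.0  # simulated seconds

        def advance(self, dt):
            self.now += dt

    def run_episode(clock, agent_step, env_step, horizon, dt):
        # Advance the world in fixed simulated steps, however long each
        # agent_step takes in real (wall-clock) time.
        while clock.now < horizon:
            action = agent_step(clock.now)  # may be slow on weak hardware
            env_step(clock.now, action)     # environment reacts at simulated time
            clock.advance(dt)               # simulated time moves by exactly dt

    # Usage: a 1-second episode at 100 Hz, independent of real execution speed.
    clock = SimClock()
    run_episode(clock,
                agent_step=lambda t: "noop",
                env_step=lambda t, a: None,
                horizon=1.0, dt=0.01)
    print(clock.now)  # ~1.0 simulated seconds (modulo float accumulation)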


Sure. I am Jovan Williams from http://www.monad.ai. I know it's true because I am sitting in front of working aspects of it that I have been researching and developing full time for approximately four years. Some years ago, I was operating a full stack on a 4-core Intel processor. While resources were pegged, that only contributed to increased latency, which is why I created my own simulation layer to continue my proofing work. I utilized various software to create a simulated virtual environment with time scaling and continued with my work.

I charted out some ways hardware had to evolve over the years. So far it's beating my estimates. I conducted a simple upgrade to my hardware a year ago and saw various latency figures cut in half, which is exactly what I estimated. The industry continues to push hardware toward capabilities that will only increase performance.

So yes... Currently, I can run my stack in real time on an 8-core consumer-grade computer. I have already proofed it beating human response times in various tests by 100%. This is before any specific optimizations. Structure matters [Software], and I have other unspecified hardware in the loop. I intended to apply to Y Combinator in March, but will extend this out a bit. I'll be going more public with proofed functionality and capability. I'm just making my rounds in various communities/mediums, as I have been doing for some time, to correct the record and get a gauge of people's sentiments.


Hi Jovan. Thanks for being open about your background and good luck with your endeavour.

It's perhaps not my place to offer any sort of advice, but it might be a good idea to be a little conservative with your terminology ("AGI") on forums like HN. The wrong response may well hurt much more than your HN karma, especially if you're looking for funding.


I'm self funded. I have been for a number of years and throughout the crucial stages of my work. I chose this route to ensure the integrity of my work.

While I have always remained truthful, which indeed has consequences, I am becoming more open as doing so does not (at this stage). I know exactly who frequents this board, who will likely read my comments here and in other places where I have openly attributed my name. I am also aware of what can be mined to unmask me elsewhere. I know exactly what the potential consequences are.

If funding is withheld from me because I state inconvenient truths, I don't desire it from such entities. Capital is plentiful in the world. Powerful ideas and manifestations are not.

I speak more openly because I see a world increasingly at war with itself, because truth and intelligence sit at the back of the bus while disinformation/manipulation/profit for profit's sake sit at the front. I don't want to birth something as powerful as AGI into such a world. I don't want to be funded/influenced by someone who holds contrary views... which is why I've operated from my own capital base up until now.

How open and frank I am relates to the stage of my work. Take that as you will; for some, the Bane meme comes to mind. We're going to enter the intelligence age on new terms, not via carry-over terms dictated by careless capital. If a particular capital entity wants to be on board for the incredible financial upside such a technology maintains, they'll necessarily have to get on board with, and get comfortable with, the ideas that I have outlined.

It isn't a hard pill to swallow. It's centered on truth and the genuine progress of mankind: intelligence embodied in a truer form. It currently functions on consumer-grade hardware. You can also take that as you will.

Thank you for the advice ^_^. However, I know exactly how the 'game' is played.


Computational power and memory estimates can be made based on existing knowledge of the human brain. The one big assumption is that the neuron is the source of human intelligence.
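
(For a sense of scale, one common back-of-the-envelope version of that estimate multiplies neuron count by synapses per neuron and average firing rate. The figures below are rough literature values assumed for illustration, not numbers from this thread:)

    # Rough estimate of brain-equivalent compute; all figures are approximate
    # literature values, assumed for illustration only.
    neurons = 8.6e10           # ~86 billion neurons in the human brain
    synapses_per_neuron = 1e4  # ~10,000 synapses per neuron (order of magnitude)
    firing_rate_hz = 1.0       # average rates are often cited around 0.1-2 Hz
    ops_per_event = 1          # treat each synaptic event as ~1 operation

    ops_per_second = neurons * synapses_per_neuron * firing_rate_hz * ops_per_event
    print(f"~{ops_per_second:.1e} ops/s")  # ~8.6e14 ops/s

Published estimates span roughly 10^13 to 10^17 ops/s depending on such assumptions, which is a large part of why claims that current hardware already suffices remain contested.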


An even bigger assumption is that intelligence in computers will require the same amount of computational power that it requires in humans. The AI we have so far is completely different from human intelligence (e.g. machine learning requires vast amounts of data, while humans can learn from single examples). Computers themselves have completely different abilities than humans. Intelligence in the human brain is just not a very good model for intelligence on a computer.


I agree a true general AI will probably be fairly different on a computer than in a human. Although I want to mention that humans' ability to learn from one example is mostly because we already have large priors from our life experience. We spend years learning how to talk, communicate, write, and read, through which we build a very structured symbolic logic system, which is _learnt_.

An example of this would be mathematics. If a person is never taught mathematics, he or she is limited to basic arithmetic operations. It would take that person years of learning and practice to comprehend mathematical literature. Once we have a symbolic logic network built for a certain aspect of our life, we can rapidly retrieve information based on previous logical patterns, thus allowing us to learn from one example.


Both of you are correct. One need only have understanding. You must attain a yet-to-be-discovered understanding of the human equivalent and have a depth of understanding of computational systems. While the understanding is non-trivial, the translation from one domain to the other isn't much effort. Computing resource capability scales with $$. I decided to do something unorthodox and start with limited computational resources; it drives the innovative spirit ^_-. If your processor is too 'slow', you can simply create a simulated abstraction of time and go from there. Computational power doesn't bog down/limit the effort; one's own understanding of the problem space/domains does.



