About AlphaZero in particular, a few things must be kept in mind.
First, AlphaZero still relies on a Monte Carlo Tree Search algorithm to
search for good moves. MCTS is a powerful algorithm with a very limited scope:
zero-sum, perfect-information games. So it is very difficult to see how an
MCTS-based AlphaZero could be used in, e.g., training self-driving cars.
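To make that scope concrete, the heart of MCTS is a selection rule such as UCB1, which only needs win/loss statistics from playouts. A minimal sketch (the function name and exploration constant are illustrative, not taken from any AlphaZero code):

```python
import math

def ucb1(total_reward, visits, parent_visits, c=1.414):
    """UCB1 score used in MCTS node selection: balances the average
    playout reward (exploitation) against a bonus for rarely-visited
    children (exploration). c ~ sqrt(2) is a common default."""
    if visits == 0:
        return float("inf")  # always try unvisited moves first
    exploitation = total_reward / visits
    exploration = c * math.sqrt(math.log(parent_visits) / visits)
    return exploitation + exploration
```

Note that this rule presumes playouts that terminate in a definite score, which is exactly the zero-sum, perfect-information setting described above.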
Second, the AlphaZero architecture is precisely mapped onto a checkerboard and
will not learn anything about games that don't use a checkerboard, or any
situation that is not possible to model as a game played on a checkerboard.
Third, the AlphaZero architecture is also precisely mapped onto the range of
moves of pieces in chess, shogi and go. Again, AlphaZero would be useless in
any game whose pieces had different moves (e.g. a piece with a zig-zag
move, or a piece allowed to move in spirals, etc.).
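To illustrate how tightly the network is coupled to the board, here is a hedged sketch of the kind of input encoding involved (the plane numbering and function name are my own illustrative conventions, not DeepMind's actual code): the input layer is a fixed stack of 8x8 binary planes, so any problem must first be forced into that geometry.

```python
def encode_board(pieces, n_planes=12, size=8):
    """Encode a board position as a stack of binary size x size planes,
    one plane per (piece type, colour) combination.
    `pieces` is a list of (plane_index, row, col) triples."""
    planes = [[[0.0] * size for _ in range(size)] for _ in range(n_planes)]
    for plane, row, col in pieces:
        planes[plane][row][col] = 1.0
    return planes

# A lone white king on e1 (plane 0 here, by our own convention):
position = encode_board([(0, 0, 4)])
```

A problem that has no natural 8x8 (or NxN) spatial structure simply has no obvious home in inputs of this shape.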
All of the above of course can be mitigated with different architectural
choices, but to make those choices, implement them and validate them will take
a great deal of time.
So, AlphaZero doesn't mean we're closer to _general_ AI.
Quite the contrary: it's a very specialised form of AI that will be very
difficult to use for any task other than chess, shogi or go.
> So, AlphaZero doesn't mean we're closer to _general_ AI. Quite the contrary: it's a very specialised form of AI that will be very difficult to use for any task other than chess, shogi or go.
This is a very true statement and one that I think a lot of people who aren't in ML/DL, but are "worried" about AGI, miss.
There is, however, a common thread among everyone in AI: they tend to think of AGI as "One algorithm to rule them all."
As a practitioner and AGI researcher, however, I think that AGI is more of a system of specialized or narrow AI tasks that can together solve all tasks. At the risk of oversimplifying and anthropomorphizing, this type of problem solving is functionally how we do it as humans.
So we would have a corpus of solved narrow systems (discrete, known rule spaces in the sense of AlphaGo etc.) that is "activated" by an executive function, which can recognize the problem set and then pass subsets of a larger problem to the narrow solutions. Those solutions are then "backpropagated" and synthesized into the general problem solution.
In that sense, I would argue that narrow solutions like AlphaGo etc. do get us closer to general AI, because they grow the corpus of solution paths for general problems.
I think you are only considering supervised learning's capabilities, which I assume is the field of your AGI research? I'm working in reinforcement learning and DRL research, and RL was born to address the shortcomings of supervised learning. DeepMind is arguably the forerunner in RL right now, and AlphaZero is the crystallization of their RL research.
Yes, AlphaGo uses NNs and other SL techniques, but the core is very much DQN-based RL. No amount of SL can effectively play go and invent new moves. RL can already solve a large number of real-world problems with a rather simple algorithm, from self-driving cars to video games to NLP. RL can tackle all those problems with pretty much the same core algorithm. The question lies less in IF RL can solve more general AI problems, but rather HOW to solve them. From a high-level view, we are having a lot of trouble with its convergence properties mathematically and its extreme sample inefficiency. This is the reason why Boston Dynamics doesn't use much RL and Waymo doesn't use much RL: they can simply do much better with current techniques without going RL.
AlphaGo is still a major step forward regardless, because it's one of the biggest leaps in RL we've taken in recent years. It suddenly lets RL converge stably on solutions more than we ever could before. AlphaGo's contribution is more than just building a specialized Go bot; it's a much more stable RL algorithm that lets us approximate non-linear functions (the majority of real-world applications are non-linear). If I were to put my money on it, AI could very well be entering a new era with AlphaGo and their DQN.
I disagree. The naming convention [Artificial Intelligence] is still a large shoe that these purpose-built applied engineering solutions have yet to fill. Meanwhile, for profit/notoriety/marketing, people want to trample on yet another namespace? What you just described is essentially the architecture of a self-driving car. It's yet another applied engineering solution of Artificial Intelligence. It is not Artificial General Intelligence. Scaling/distributing the computational space of an applied Artificial Intelligence solution is not Artificial General Intelligence. This is the same thing that led to optimization algorithms being called Artificial Intelligence. If you aren't able to maintain foundational distinctions, you lose track of what you're searching for and trying to achieve. Outwardly, you capture more money and attention. Inwardly, you become unraveled and lose your capability to solve the elusive problem. Eventually, after much fame, wealth, and feigned 'success', one asks oneself: was it worth it? Depends on what your original aim was.
I'm not sure what you're arguing but it seems like my key point wasn't communicated well.
> What you just described is essentially the architecture of a self-driving car.
Yes, every narrow AI is a system of systems to an extent. So expand on that concept, but outside of a single firm/system, such that the self-driving car system is one single solution path solving "transportation" (which would comprise automated flight/rail, etc.) and is a node in a larger general system - like hub and spoke.
> The naming convention [Artificial Intelligence] still is a large shoe that these purpose built applied engineering solutions have yet to fill.
Nobody is questioning that. The size of the narrow AI market is arguably infinite.
You seem to be arguing that a single entity will fail if it attempts to take a narrow AI system and make it generalizable, with which I am in agreement.
If, however, there were 10,000 or 100,000 or 1,000,000 narrow AI companies/systems (like a self-driving car system or AlphaGo, etc.), those could fill the corpus of solutions which an executive-function system could utilize depending on the application, and together they would be what we call AGI.
> I'm not sure what you're arguing but it seems like my key point wasn't communicated well.
It was, and quite well. We're speaking the same language. We just have different conclusions.
> Yes, every narrow AI is a system of systems to an extent. So expand on that concept but outside of a single firm/system. Such that the self driving car system is one single solution path solving "transportation" which would comprise automated flight/rail etc... and is a node in a larger general system - like hub and spoke.
And you still have nothing more than a hub-and-spoke system of systems authored for specific problem spaces, and your spokes will increase with every new problem space until you overwhelm your hub. A horrible architectural approach that, if not caught in the initial stages, will result in catastrophe down the road... Weak AI is weak AI no matter how you scale it.
> You seem to be arguing that a single entity will fail if it attempts to take a narrow AI system and make it generalizable. Of which I am in agreement with.
This is a start in the right direction...
> If however there were 10,000 or 100,000 or 1,000,000 narrow AI companies/systems (like a self driving car system or alphago etc...) those could fill the corpus of solutions which an executive function system could utilize depending on the application and together they would be what we call AGI.
No, it's strung-together weak AI. It will require significant and unreasonable amounts of resources. Its capability will increasingly hit diminishing returns, and you'll end up with a Frankenstein's monster of a code base that no one can manage or understand... Sounds a lot like the path weak AI is already heading down... At such a point, it's best to just scrap it and start all over. Something that Hinton and other prominent figures are finally admitting. Something I concluded years ago, which led me down a different path. Now, you're more than welcome to state: well, hey man, that's your opinion and you're wrong, and I'll wish the tens, hundreds, millions of narrow AI companies the best, just as was conveyed to me a number of years ago. Weak AI is weak AI. It is a class of optimization algorithms. You can jerry-rig this all you want... You still have nothing more than a system of systems of optimization algos. If you think this is what intelligence is, I'm not sure what to say.
> You still have nothing more than a system of systems of optimization algos. If you think this is what intelligence is, I'm not sure what to say.
Until someone comes up with a better definition of intelligence, that's what I'm sticking with. I think you're looking for an elegant solution right out of the box - the "one algorithm to rule them all" - and I don't think that is feasible from an engineering perspective, if for no other reason than that no singular system has anything near the data-collection nodes needed for specificity on the range of tasks that would satisfy any definition of "General."
Having raised three other humans and observed them while building DL systems myself for a living, I feel more strongly every day that human intelligence is a hodgepodge of "weak AI" systems glued together with an exceptionally efficient executive function. AGI is as much a community-building and humanity-wide input-collection challenge as it is a math problem. We need to think about it that way.
> I feel more strongly everyday that human intelligence is a hodgepodge of "weak AI" systems glued together with an exceptionally efficient executive function.
I’m in complete agreement with this. I try to avoid AGI discussions because people get upset when I argue that the vast majority of human “intelligence” seems to be strong pattern matching, and we can’t really define the parts that aren’t in any useful way.
Take the person you are discussing this with. The majority of their point seems to be a hang-up on the word “intelligence”.
I find that a pointless thing to argue over. Just agree and say it is an intelligence simulator which is indistinguishable from a real intelligence.
> Until someone comes up with a better definition of intelligence that's what I'm sticking with.
You'll get a capability demo instead. It won't fail to impress. Definitions and designs are for another day.
> I think you're looking for an elegant solution right out of the box - the "one algorithm to rule them all" and I don't think that is feasible from an engineering perspective if for no other reason than no singular system has anything near the data collection nodes needed for specificity on the range of tasks that would suffice any definition of "General."
What else is one looking for who claims they're trying to solve the intelligence problem? Marketing an optimization algorithm as the next coming might make you rich in the short term, but it doesn't bring you closer to the truth. It does in fact take you further away. So 'the elegant solution' / 'the hard problem' was the only thing I set out to tackle some years ago. Otherwise, I'd have been wasting my time / not being truthful with myself. It's feasible from a research and engineering perspective. Few commit themselves to the TRUE task and the likelihood of failure. I was OK with that and stuck with it. I self-funded my work. It mainly centered on research. Thus, there were no exits. I either saw it through and achieved it, or I didn't.
As far as:
> no singular system has anything near the data collection nodes needed for specificity on the range of tasks that would suffice any definition of "General."
Sure it does. Look in the mirror and log onto the web. I've let the missus play online for a bit now ;).
> Having raised three other humans and observing them while building DL systems myself for a living, I feel more strongly everyday that human intelligence is a hodgepodge of "weak AI" systems glued together with an exceptionally efficient executive function. AGI is as much a community building and humanity wide input collection challenge as it is a math problem. We need to think about it that way.
My graduate work centered on the underpinnings of DL (distributed optimization). After years of industry experience, I searched for a new challenge. After some open-ended research in physics/photonics, I came to Artificial Intelligence. I scratched my head for 3-4 months as to why distributed optimization was being called Artificial Intelligence. I took the broad lot of it and threw it in the trash, as prominent figures are only now stating:
https://www.axios.com/artificial-intelligence-pioneer-says-w...
You're thinking about AGI as if it's a chain of DL systems, because that's what's made you money and where your work has centered over the years. I took the broad majority and trashed it, as Hinton has now indicated others should do, and started from scratch. I have no such bias. However, as my graduate work centered on the fundamental underpinnings of statistical optimization / distributed optimization, I know exactly what its limits are.
The human race is far more than a hodgepodge of optimization algos w/ an executive function (whatever that might be given the clearly varied forms of it).
Apologies for the off-topic questions, but I am simply too tempted not to ask them, I admit:
How does one make a living as an AGI researcher? How does one pay a mortgage and have the money and freedom to go on impulsive vacations with their wife?
> How does one make a living out of being an AGI researcher? How does one pay mortgage and has money and freedom to go on impulsive vacations with their wife?
Well, you don't, unless you work for OpenAI (which I don't). Not sure how OpenAI does it or how well they pay, but I'm sure it's good. They have good donors.
And what does an AGI practitioner even mean?
To clarify, I'm an ML practitioner. "AGI practitioner" means nothing.
I don't mean to sound demeaning but yes, IMO "AGI practitioner" indeed means nothing (and I am saying that as a guy who aspires to work on AGI). Many of us programmers have interesting ideas in the area of AGI but quite frankly, we can "practice" them our whole lives without achieving squat.
The point of Soar was/is to provide a framework for a collection of problem solvers with diverse methods to work together in a single agent. This was developed because of the insight that humans use diverse strategies to negotiate the challenges of their environment. Or... lots of weak AI adding up to general AI, FYI.
Those vectors are only used to generate the move trees though. That part of the architecture is common to pretty much all MCTS board game AIs ever. The value in AlphaZero is in the neural nets used for the expert policy and the value functions and those don’t have anything about the game rules encoded into them at all.
I agree it’s probably quite constrained in the range of possible applications. Everyone was expecting Deep Blue to revolutionise AI applications too. I know the tech is different, but the fact that it seems optimised for a highly constrained problem domain isn’t; and in fact, arguably, the problem domain addressed by Deep Blue seemed for a long time to be much more general.
How adaptable is AlphaZero to arbitrarily multidimensional grids though?
> Everyone was expecting Deep Blue to revolutionise AI applications too.
That doesn't match my memory at all. The reaction then was dominated by the likes of "this is super-narrow, not real intelligence". (The 80s did have a lot of hyped expectations of related tech, it's true, but that was around 10-15 years earlier.)
>> The value in AlphaZero is in the neural nets used for the expert policy and the value functions and those don’t have anything about the game rules encoded into them at all.
They do, in the form of their inputs, which are basically vector representations of a checkerboard. It's obvious that the two (types of) networks can learn something useful from that particular representation of a problem. But other representations, of different problems? That is not obvious.
I don't agree with this sentiment, although I agree that AI is not nearly at the level of the hype that pop culture makes it out to be. AlphaZero is still a significant contribution to 'AGI' that shouldn't be buried.
It's true that AlphaZero's knowledge can't be generalized to other systems, but its biggest contribution is a _stable_ RL system that can solve problems no other system can. This is the first piece of the puzzle of more general AI. Generalization I would consider the second piece of the puzzle. Generalization may be achieved through research in transfer learning, model-based RL, and symbolic networks. But without a stable RL algorithm such as DQN as a foundation, generalization has nothing to stand on.
Having not defined what intelligence is and having not declared the nature of General Intelligence, you're sure an aspect of clearly defined weak AI is a significant contribution to 'AGI'... Interesting.
> It's true that AlphaZero's knowledge is unable to be generalized for other systems
Interesting admission.
> This is the first piece of the puzzle of more general AI.
The first piece is generalized intelligence. Architecturally, it looks nothing like AlphaZero. However, you feel:
> Generalization, I would consider as the second piece of the puzzle to more general AI.
How is that the second piece? It's the piece.
> Generalization may be achieved with potential research in transfer learning, model based RL, symbolic network. But without a stable RL algorithm such as DQN as foundation, generalization has nothing to stand on.
So you're of the belief that current approaches are compatible with, and the underpinning of, Artificial General Intelligence, while Hinton is convinced one needs to scrap it all and start over. Sound advice is being ignored, and there is a clearly entrenched decision to continue pushing along with iterating weak AI. I came here to test the waters... The commentary and the K-value feedback I've received so far inform me quite profitably.
Hinton's criticism is very valid, but it's not quite about AlphaGo and its branch of ML. His criticism revolves around supervised learning and backprop, which cannot be used to achieve the so-called AGI, because an ANN is nothing like our brain's real NN. When Hinton gave that speech back in 2014, NNs had a huge explosion of hype, and NNs were mostly used for supervised learning, which is really only good at classification and regression problems; it cannot make decisions outside of its training.
The famous DeepMind DQN paper (the core of AlphaGo) was published after Hinton's talk. The DQN paper practically opened a new chapter in the reinforcement learning field. I am not sure if you are familiar with reinforcement learning. RL is learning by trial and error, model-less and largely non-Bayesian, similar to how humans learn. Up until AlphaGo, the RL field was stuck in limbo because it was having a very hard time learning non-linear problems (which are the majority of problems in nature).
When I say generalization, I mean generalization of knowledge. Generalization is the second piece of the puzzle because, even as humans, we learn from experience; after enough examples, we begin to generalize. Up until DQN came out, we couldn't even learn effectively. It's the equivalent of a human baby with a severe memory problem. With DeepMind's DQN, we can achieve much more stable learning on non-linear systems, and we can begin to add components such as generalization (e.g. transfer learning), intuition (e.g. intuitive physics), and symbolic networks.
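For what it's worth, the two stabilizing tricks from the DQN paper being alluded to here, experience replay and a frozen target network, can be sketched in a few lines (class and function names and hyperparameters are illustrative, not DeepMind's code):

```python
import random
from collections import deque

class ReplayBuffer:
    """Experience replay: store transitions and train on random
    minibatches, which breaks the correlation between consecutive
    samples that destabilizes naive Q-learning with a neural net."""
    def __init__(self, capacity=10_000):
        self.buf = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buf.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buf, batch_size)

def td_target(reward, done, max_q_next, gamma=0.99):
    """Bellman target for Q-learning, computed with the *frozen*
    target network's max Q-value so the regression target doesn't
    chase the online network as it is being updated."""
    return reward if done else reward + gamma * max_q_next
```

The online network is periodically copied into the target network; between copies, `td_target` stays fixed, which is the stability property being discussed above.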
I am not too sure what you mean by weak AI and general AI. For me, an AI which can learn similarly to how humans learn, apply generalized knowledge when facing a brand-new problem, and independently think and make decisions without human assistance - that's general enough.
Yes, much work needs to be done, but I don't believe we are going in the wrong direction. Though I'd be glad to be proven wrong, and I am very fascinated by this debate; if you would like, we could continue discussing it over email/chat?
I would just like to echo this comment, as it exactly and indefensibly strikes weak AI out of Artificial General Intelligence. I fail to see how specially crafted optimization algorithms continue to receive higher attribution. First, they were renamed Artificial Intelligence. A great deal of cash exchanged hands based on this attribution. To retain the spirit of the true definition of Artificial Intelligence, the naming convention Artificial General Intelligence was crafted. Even then, the allure of cash/fame progresses to taint that, whereby people are calling expositions of optimization/search algorithms Artificial General Intelligence. It saddens me that mankind has turned the ubiquitous and beautiful information age into the disinformation-for-profit age, from top to bottom and bottom to top.
It's highly disingenuous to portray the researchers in machine learning as some sort of sellouts. Hinton was working on neural nets even when they were an uncool and fringe area of research. They didn't sell out AGI for some profit motive.
The thing is, we don't have any clear path laid out to follow to achieve AGI. Various paths proposed earlier turned out to be overoptimistic dead ends.
The current "weak AI" you disparage is an achievement expanding generalization of problem spaces using unified methods way better than past systems.
So even if our current progress seems disappointing, please examine what's happened and why research has taken the current path before tarnishing researchers with unjustified quips.
I don't recall portraying anyone in that light. I highlighted the nature of an industry. Who aligns themselves with this nature was left unstated. If this clear and present truth offends, it's likely because it has merit, as validated by the level of offense one encounters. It quite clearly cannot be defended. If you believe what I have stated can be, you're more than welcome to produce sound arguments that try. We can walk through an incredible number of examples of what I have stated together.
> Hinton was working in neural nets even in the time when they were uncool and fringe area of research. They didn't sell out AGI for some profit motive.
Listen to what you're saying....
> neural nets even in the time when they were uncool and fringe area of research.
And yet, it has made others billions and continues to mint money. Who's centered on the fundamental problem, as opposed to conducting applied engineering for profit? Who's over-marketing themselves and their efforts as fundamental theoretical research when it's more or less optimization for applied engineering? What ideation is new, and what is simply relabeling old pioneers' work as one's own? Who minted LSTM? Whose name remains all over it? Who made a mockery of a prominent contributor? Who no longer discloses the details of their work, given the proven nature of the industry? If the critique doesn't apply, let it fly.
Who's taking this advice? Who's funding people thinking outside of the box? Who's hiring someone who's thinking outside of the box? So, who's really and wholly centered on solving AGI for the sake of solving it? It's fun to market yourself as doing so for increased prominence/money. It's a whole other ball game to be internally oriented and structured in pursuit of it. Applied engineering/optimization of weak AI for business applications is not AGI research.
> The thing is we don't have any clear path laid to follow to achieve AGI
The path was always there. One need only pursue it for pursuit's sake. No exits. No distractions. No business case. No payouts. A desk, pen, and paper.
> Various paths proposed earlier turned out over optimistic and dead end.
False. Various paths proposed earlier are the same ones being turned into profitable solutions today. They are the same ones underlying the bulk of white papers today. They weren't overoptimistic. They weren't dead ends, which is why various groups are using them to suck down billions of dollars today. What happened to cause the previous AI winter is that people rushed fundamental research into applied engineering. The same is happening today, and it will lead to a 'winter' for various groups who are deep in it. Statistics overshadowed fundamental mathematics and understanding. Brute force overtook intelligence... and yes, that leads to dead ends... clearly. Yet, here we are again. A bunch of statistical brute-force machines being over-marketed for profit, leading fundamental research into a dead end. Google has spoken out about it, Hinton has, Microsoft has, Yann LeCun too, echoing my sentiments. Yet the allure of profit/notoriety continues to attract the same thinking / soon-to-be-failed approaches to something that is fundamentally beyond current work.
> The current "weak AI" you disparage is an achievement expanding generalization of problem spaces using unified methods way better than past systems.
It's distributed statistical optimization. There's no need to fluff me; I am centered on it. It works via large data sets, limited and specific state spaces, and large amounts of computational resources running tons of iterative steps. It is brittle and not general. It works today because we have enough computing resources to brute-force various problems - problems that are amenable to statistical patterning.
> So even if our current progress seems disappointing
I never stated I was disappointed, nor did I disparage anyone. I stated more clearly the nature of the progress, the goals and driving forces, and the fast-approaching limitations.
This frees one to respectfully acknowledge and consider what's been done and move forward and beyond it to more capable pastures.
> please examine what's happened and why research has taken the current path before tarnishing researchers with unjustified quips.
Please reduce your sensitivity level so that you can reason yourself to higher planes of consideration. Less emotion and more reasoned/truthful admission. I know exactly what happened to research in times past, which informed me as to how to conduct myself in the present. I know exactly what the driving forces are, and I know quite clearly, as we all do, how they influence people. Hinton, as well as others now public, agrees. Microsoft/Google have stated much of the work is overhyped. Hinton says pioneers should scrap everything and start over. LeCun stated that deep learning is approaching a brick wall and is overhyped. It's exactly what I said to myself years ago and am stating now, and there's zero problem with stating this truth that so many have now brought themselves to state.
I want to preface that I know no more about AI than the average technologist, so I'm not making any claims.
>> Various paths proposed earlier turned out over optimistic and dead end.
> False. Various paths proposed earlier are the same ones being turned into profitable solutions today. They are the same ones underlying the bulk of white papers today. There weren't over optimistic. They weren't dead ends which is why various groups are using them to suck down billions of dollars today. What happened that caused the previous AI winter is that people rushed fundamental research into applied engineering.
How do you fit full brain emulation in that description? And in particular Henry Markram's work with Blue Brain and Human Brain? As far as I know it doesn't have any profitable outcomes, and people have been working at it for decades - and now even with pledge funding by the EU and others to the tune of a billion euros - yet nine years after Markram said we could have a functional human brain in ten years, nothing much seems to have emerged.
There's a vector encoding "queen moves" and another encoding "knight moves". Between them, they cover all possible chess moves. Knight moves obviously are modelled by "knight moves".
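For reference, the published AlphaZero chess encoding represents the policy as 8x8x73 planes: 56 "queen-move" planes (8 directions x up to 7 squares), 8 "knight-move" planes, and 9 underpromotion planes. A sketch of the knight part (the ordering of the deltas is my own assumption; the paper does not fix one):

```python
# The 8 possible knight jumps, each assigned its own policy plane.
KNIGHT_DELTAS = [(1, 2), (2, 1), (2, -1), (1, -2),
                 (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_plane(d_row, d_col, first_knight_plane=56):
    """Map a (row, col) displacement to its knight-move policy plane,
    or None if the displacement isn't a knight jump. Planes 0-55 are
    assumed here to be the queen-move planes."""
    if (d_row, d_col) in KNIGHT_DELTAS:
        return first_knight_plane + KNIGHT_DELTAS.index((d_row, d_col))
    return None
```

This is exactly the sense in which the move geometry of chess is baked into the output head: a piece with a genuinely new move pattern would need new planes.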
We're all already part of a society-scale, distributed hive mind--have been since the invention of language. Birds flock together, they eat together, they think together. Families, friend networks, cities, global societies, they all form communication topologies that have analogs in the brain. Thoughts bounce from person to person, memes spread amongst the computational fabric of groups of people. It's nothing new.
We as a society have formed networks and systems to solve problems like finding energy, producing food, and organizing economic output. We live in a distributed intelligent system that disseminates knowledge and programs our preferences, responses. We're all utilized as work units, and economics does that.
I think it's a mistake to think of AGI as something separate from us, something that doesn't already exist. We're more like a cybernetic superorganism. We've put so much computational power in charge of the choices that we make, and we rely so often on recommendations from computational systems, that if you zoom out far enough, it becomes clear that we are part of a huge, cybernetic Overmind.
The Overmind is just moving more computation away from humans because hey, they're slow. It doesn't really speak to us. Do you speak to your neurons? Nevertheless, it has its goals, its resources, its needs, its preferences. People carry out its wishes, statistically. It turns out that its wishes align very well with economics: More computers! More network! More screens! Connect all the stuff! All the companies doing this are making huge dollars. The mind or minds are just centralizing now, and economics drives that. Computers already fully run the stock market. They run shipping and logistics. They are used to optimize all kinds of economic outputs. And they are used (by humans) to design better computers.
At the broadest scale, we are already that self-improving intelligent system, it just doesn't look like it from meatspace just yet.
It's kind of irrelevant whether it could utter the words "I think, therefore I am." Who would it tell, anyway?
The "hivemind" argument seems to predict that as society scales up (either through massive population growth or through faster and better interconnectedness, such as through the internet) that as a result we should be seeing much faster gains in technological progress especially in the last few decades or so. However there are quite a few observations that a lot of this progress has sort of slowed down compared to the early 20th century (see the arguments for "technological stagnation"). At the very least, technological progress hasn't increased linearly with population growth and better communication. In other words, the rate of technological progress looks more discontinuous and not obviously a function of societal coherence.
I'd argue that what we're seeing is both a centralization of computational infrastructure and a massive infiltration of everyday life by digital (computational) technology, often for no good reason other than that it's a way to make money. We certainly have not stagnated w.r.t. hardware or software, but just keep trying to scale to the stars. I'm cynical in that I believe the "AI first" movement is basically offloading the next phase of computational progress onto computers themselves, because we are either too tired, have run out of ideas, or can't keep up, or it's necessary as a competitive business strategy, or it's simply cheaper.
Technology keeps giving people nice gizmos and plenty of flashy entertainment, but it's mostly self-serving, technology begetting yet more technology without clear purpose, as evidenced by stupid shit like Bluetooth toothbrushes (https://www.lookfantastic.de/oral-b-pro4000-x-action-toothbr...) and the insane number of cryptocurrency Ponzi schemes. Toothbrushes that need a freaking network connection and bits worth absolutely nothing. The crescendo of our civilization! Well done. If there's an overmind, it's feeding us shit.
I don't think your interpretation of that prediction is accurate. Rather, it would be that there are "bursts" of technological progress followed by slower or no gains while the world "catches up." That seems to follow the history of technological progress more accurately.
I think a better interpretation would be that those "bursts" happen at tighter intervals, and if you look at the course of history that seems to be the case.
For example, the period between the wide adoption of horses/plows in agriculture in the 1700s and the wide adoption of internal combustion in the 1940s was ~240 years. From internal combustion to the wide adoption of transistors (1970s) was about 30 years; from transistors to the internet, about 20 years; from the internet to ? (deep learning, 2012) looks like about 15 years.
Not sure if that's a perfect fit but I think it represents a pretty compelling case.
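As a rough sanity check, the interval arithmetic above can be tabulated directly. The dates below are the comment's own rough estimates, not established figures:

```python
# Rough adoption dates, taken from the comment above (estimates, not established figures).
milestones = [
    ("horses/plows in agriculture", 1700),
    ("internal combustion", 1940),
    ("transistors", 1970),
    ("internet", 1990),
    ("deep learning", 2012),
]
years = [year for _, year in milestones]
# Gap between each consecutive pair of milestones.
intervals = [b - a for a, b in zip(years, years[1:])]
print(intervals)  # [240, 30, 20, 22] -- broadly shrinking, though not a clean geometric series
```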
I think that bursts of technological progress follows more from the model of individualized intelligence, whereas continuous progress follows from the distributed, networked model of intelligence.
A promoter of the distributed model of intelligence might argue that Einstein was only able to produce the general theory of relativity because of the knowledge already contained within society, such as the mathematics and physics that had already been built up to that time. All the stuff from Euclid to Newton to Gauss to Poincare and Minkowski that Einstein's work relied upon.
Does that imply that Einstein wasn't really smart? If you narrow your focus to just the innovation Einstein made, where did that come from? Did it come from the "hivemind" or was Einstein himself doing something special that allowed him to develop the insight?
More individualized intelligence would predict that we would see smaller intervals between bursts as society increases in size and connectedness (more chances for Einsteins to appear, more likelihood that they can work together). But if intelligence is somehow an emergent process from the network of all humans itself, then as society grows we shouldn't see many bursts at all, just a fairly continuous increase in knowledge as little bits and pieces get absorbed and distributed.
I think you're making too many assumptions about the inner workings of "the brain".
If we look at actual brains - including Einstein's - are they not bursty? Don't people have periods of greater intellectual output with lulls in between? Seems to match pretty well.
You are right, and it's not even a new idea. It's the premise of Douglas Adams' H2G2 books, where the Earth was literally built as a supercomputer by another supercomputer, named Deep Thought, after it finished computing the "Answer to The Ultimate Question of Life, the Universe, and Everything" and realized that the answer was meaningless without the right question and that it would need a much more powerful computer to find it.
We know that for most problems, a group of people tends to be better at problem solving than an individual [1]. Even if AI technology only reaches human level and does not exceed it, continually increasing efficiency would make an AI smarter than any small group of humans, and immense bandwidth relative to human communication would make it more effective than any large human organization.
In addition, if an AI possesses sufficient computing resources, which will certainly become available in the next few decades if they aren't already, it will have inherent strong advantages like serial computation speed and memory size that exceed any human brain's. So the real barrier for AGI is software and not hardware.
If AGI software is developed before we have sufficient hardware to run it at human-brain speed, then it will become more capable at the rate at which we can put hardware into use, which is likely exponential given how parallelized the human brain appears to be.
The major counterargument I find most convincing regarding outsized impact of exploding intelligence is that many problems are exponentially hard (or harder than that) and thus exponential intelligence can only make linear or sublinear progress on them.
However, linear or even sublinear progress may still lead to quite drastic changes in the world. If an organization can marginally predict stock price movements better than the rest of the world, in a few decades it will accumulate great resources and power. The same is true for many other important domains.
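The exponential-hardness point can be made concrete with a toy model (the doubling rate and cost function here are illustrative assumptions, not measurements): if raw capability doubles every period but a size-n problem costs on the order of 2^n, the solvable problem size grows only linearly.

```python
import math

def solvable_size(capability: float) -> int:
    """Largest n whose 2**n cost the given capability can afford."""
    return int(math.log2(capability))

# Capability doubling each period: 2, 4, 8, ... (exponential growth in raw power)
capabilities = [2.0 ** t for t in range(1, 11)]
sizes = [solvable_size(c) for c in capabilities]
print(sizes)  # [1, 2, 3, ..., 10]: linear growth despite exponential capability
```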
> the real barrier for AGI is software and not hardware.
This is indeed true. It has actually been true for some time. AGI is computable on present day hardware if one has enough knowledge on how to correctly structure it. A fundamental understanding of intelligence is the first step. How one crafts this understanding into software is the second step. Hardware reached capability in recent years.
> exploding intelligence
There is no such thing. Time is still required, as with all things. Teaching/learning/interaction is still required. Furthermore, a controller/overseer of the system can more than adequately limit progress they are not comfortable with. I find the idea of exploding intelligence/overnight super-AI to be pure fantasy, not at all informed by the structure of AGI.
> If an organization can marginally predict stock price movements better than the rest of the world..
The problem is this kind of thinking... AGI is achieved and people rush to apply it to games to get rich. Sorry, that will not occur. It will not occur because the stock market is fundamentally a [game]. A game with disadvantaged players. A game with incomplete information. A game whose rules/dynamics change frequently to suit inside players. One could make all the accurate predictions one wanted; if the game changes underneath you or before you can act, your lofty predictions have no real-world value, and that's exactly how the market behaves.
From what I gather from your comment above and another one, the AGI you talk about does not include general intent or free will to act differently from what its creators anticipate. An AGI, perhaps a different variety from yours, can behave outside of our predictions in multiple ways [1], unless we truly solve the problem of constraining its will to a range that is acceptable to us.
[1] Note that there are incentives for at least some groups to develop a highly capable AGI with the characteristics I describe.
> Time is still required, as with all things.
I agree that time is required but an AGI can multiply itself and collect all requisite information from the world quite quickly. There is so much available to learn just from the Internet if it knows how to learn independently like humans do. Computational resources might be a bottleneck but presumably a human-level AGI can at least do online work at a minimum wage (e.g. translating documents, simple accounting, ...). It can execute many 'brains' in parallel to accomplish more work, to acquire more resources, to do more work profitably....
Humans are limited by 24 hours a day. An AGI can over a fairly short amount of time (months) accumulate sufficient resources to make thousands or millions of its copies perhaps with variations to specialize for different kinds of work. Over time, it should gain experience to perform more and more highly-valued work as well.
> A game with disadvantaged players. A game with incomplete information. A game whose rules/dynamics change frequently to suit inside players.
A smart AGI can form alliances and share benefits with inside players and execute any cunning strategies not available to humans who at least need to take into account law enforcement. There are many other advantages an AGI has over human organizations (some of which I mentioned above).
> From what I gather from your comment above and another one, the AGI you talk about does not include general intent or free will to act differently from what its creators anticipate.
It actually does, and it's actually the nature of my work. That being said, I still have a high fidelity of control, near-absolute. I can still set immutable laws/restrictions and prevent undesired behavior.
> I believe a genuine AGI can behave outside of our predictions in multiple ways, unless we truly solve the problem of constraining its will to a range that is acceptable to us.
It sure can. However, it cannot act beyond laws/restrictions that I set forth, and indeed my control functionality centers on very deep percepts. If you have a crappy architecture/limited understanding, you end up with an overly complex, flawed, and limited control algorithm... one that can be even more complex than the underlying system it attempts to control. This is evident in weak AI. It is not the case in AGI, at least not in my work.
> I do not believe anyone has solved that
No one in their right mind has published it... as it's valuable IP and carries substantial power [which is why it shouldn't be publicly disclosed].
> I agree that time is required but an AGI can multiply itself and collect all requisite information from the world quite quickly.
Incorrect. It cannot do so unless its creator has allowed it to. In the case of its being allowed, what hardware does it migrate to? It needs to be provided by its creator(s). Hardware takeover? Sorry, this is again sci-fi fantasy. Are you able to take over someone else's body/brain in totality? No. The same rule applies here. Let the fantasy/fear go away. There is no grounding. It's a position pushed by people hoping to falsely profit/gain attention/get article clicks...
> There is so much available to learn just from the Internet if it knows how to learn independently like humans do. Computational resources might be a bottleneck but presumably a human-level AGI can at least do online work at a minimum wage (e.g. translating documents, simple accounting, ...).
Sure. What's the problem with this? Its progress can be overseen, audited, and/or halted at will. So what's the issue here?
> It can execute many 'brains' in parallel to accomplish more work, to acquire more resources, to do more work profitably....
You're drifting back into the flawed fear/uncertainty/doubt armageddon scenario. It cannot execute on anything other than the hardware I consign it to, just like you. If I decide to scale it, that's what I decided. At any given point in time I can halt it or power it down... just like any program/computational system today. So, what's the issue here?
> Humans are limited by 24 hours a day. An AGI can over a fairly short amount of time (months) accumulate sufficient resources to make thousands or millions of its copies perhaps with variations to specialize for different kinds of work. Over time, it should gain experience to perform more and more highly-valued work as well.
Repeating the same flawed scenario doesn't make it true. See answer above.
> A smart AGI can form alliances and share benefits with inside players and execute any cunning strategies not available to humans who at least need to take into account law enforcement. There are many other advantages an AGI has over human organizations (some of which I mentioned above).
All of what you mentioned above was debunked. If you have a more sound proposal for how this could occur, I'm all ears. Alliances can't occur without human intervention. None of these systems are connected, and there is no sound argument for 'viral' takeover. Your scenarios are flawed, and you've been infected by the fear/uncertainty/doubt propagandists who structure ventures to take advantage of the wallets/attention/mind share of people who buy into this nonsense. Focus your attention on the problem of intelligence [first]. Until you grasp a sound understanding of it, all of this theoretical hand-waving is for naught, especially as it's not grounded in anything possible in the real world. Put your engineering hat on, if one is available. Less theory and more practical grounding. Life isn't a fantasy-level dystopian sci-fi movie, and it's sad that certain people have created this image so as to profit. Talk about [cunning strategies] [manipulation]...
Your replies are probably valid for your system. I specifically noted above that what I describe is about an AGI that some other groups may develop to be a more free, less controlled system that allows it to improve faster and execute more efficiently.
If an AGI can indeed learn and act at the human level or above, there are reasons to believe that a more free variety will improve faster and become more powerful than a less free one. That is a big incentive for its creators to let go of some control. The question is how much control they would retain and whether that would be sufficient.
Firstly, there are fundamental limits that all things are subject to in this universe. When you push towards these limits, you discover this...
Developing AGI pushes certain limits. There aren't many who have this capability, as it requires a vast range of understanding/know-how across an incredible number of domains, including ones that have yet to be discovered. There are multiple disjoint leaps and barriers. In order to make and surmount them, you yourself are subject to certain considerations/restrictions. In resolving these, you end up with a less free and more controlled system.
> improve faster and execute more efficiently.
This occurs with order, not chaos.
> If an AGI can indeed learn and act at the human level or above, there is reason to believe that a more free variety will improve faster and become more powerful than a less free one.
There is no reasoning to suggest this... quite the opposite actually. Also, you're mistaking the capacity for learning with the successful execution of learning. Chaos leads to destruction not boundless construction. That being said, there is order to all things.
> That is a big incentive for its creators to let go of some control. The question is how much control they would retain and whether that would be sufficient.
There isn't any magic going on... You really need to convince yourself of this. You're speaking of this technology as if it gets booted up today and eclipses all human intelligence. It doesn't occur that way. A human being will have to teach it and guide it, and therein lies the same control you have today. Of course, there are further steps, because you have the capability to see exactly what's going on inside. You'll really have a hard time establishing a case for doomsday scenarios. Also, a destructive/chaotic individual is necessarily limited by their own flaws, such that they wouldn't be able to conceive of the underpinnings necessary to develop AGI. So, I hate to tell you this, but the Hollywood image of such people is wrong...
I never said even once that an AGI will become human-level smart/mature in an instant. Another point: freedom != chaos.
The AI you develop can help fact-check those (if it can understand natural language well, as a general intelligence should be able to). Please let us know when it can read and participate in our discussion.
>> AGI is computable on present day hardware if one has enough knowledge on how to correctly structure it. (...) Hardware reached capability in recent years.
How could you possibly know anything regarding whether this is true or not, given that we know nothing about AGI?
I've seen this claimed before. I think it's based on an estimation of the computational sophistication of the human brain. Not what's required to perfectly simulate a brain as such (much of the activity in brain cells is likely metabolic and not tied to their relevant behaviour), but what's required to replicate the brain's cognitive activity.
We may know little to nothing about AGI, but we do have one very common example of GI abundantly available to use as a point of reference.
I'm not sure what estimation the poster is using, though, or how likely it is to be accurate.
You are correct. You don't need to do a whole-brain simulation to achieve AGI. Instead, you need to understand an incredible amount about its processes, design, and overall nature, at which point you need to translate this into the computational domain. A lot can be 'left on the table', so to speak. The fundamental problem is how deep your understanding is, so as to know which parts you can leave on the table and which parts you can't.

As far as then putting this into a functional computational system, you would need extensive knowledge in this domain as well, so as to know how to structure the software to best exploit the hardware. Lots of prototypes, performance testing, scaling, etc. until you have a sound 'feel' for what you can expect and where things need to go.

That being said... Yes, I can run my stack on a consumer-grade CPU/GPU. I have designs for hardware architectures that don't quite exist yet, but all of that can be emulated in software. Latency is the only consequence of current hardware, and it can be trimmed with effort. Latency, when too high, can be abstracted with time-scaled simulation. So there is absolutely no blocker in hardware for developing AGI, and yes, it can be done on affordable consumer hardware... if GPUs don't continue to be resigned to Ponzi schemes and RAM prices come back to earth. That being said, if I need to, and if things don't change down the road, I'd be more than happy to spin my own hardware to keep costs in order.
Sure. I am Jovan Williams from : http://www.monad.ai.
I know it's true because I am sitting in front of working aspects that I have been researching and developing full time for approximately 4 years. Some years ago, I was operating a full stack on a 4-core Intel processor. While resources were pegged, it only contributed to increased latency, which is why I created my own simulation layer to continue my proofing work. I utilized various software to create a simulated virtual environment with time scaling and continued with my work.
I charted out some ways hardware had to evolve over the years. So far it's beating my estimates. I conducted a simple upgrade to my hardware a year ago and saw various latency figures cut in half, which is exactly what I estimated. The industry continues to push hardware towards capabilities that will only increase performance.
So yes... Currently, I can run my stack in real time on an 8-core consumer-grade computer. I have already proofed it beating human response times in various tests by 100%. This is before any specific optimizations. Structure matters [software], and I have other unspecified hardware in the loop. I intended to apply to Y Combinator in March, but will extend this out a bit. I'll be going more public with proofed functionality and capability. I'm just making my rounds in various communities/mediums, as I have been doing for some time, to correct the record and get a gauge of people's sentiments.
Hi Jovan. Thanks for being open about your background and good luck with your endeavour.
It's perhaps not my place to offer any sort of advice, but it might be a good idea to be a little conservative with your terminology ("AGI") on forums like HN. The wrong response may well hurt much more than your HN karma, especially if you're looking for funding.
I'm self funded. I have been for a number of years and throughout the crucial stages of my work. I chose this route to ensure the integrity of my work.
While I have always remained truthful, which indeed has consequences, I am becoming more open as it does not (at this stage). I know exactly who frequents this board, who will likely be able to read my comments here and in other places where I have openly attributed my name. I am also aware of what can be mined to unmask me in other places. I know exactly what the potential consequences are.
If funding is withheld from me because I state inconvenient truths, I don't desire it from such entities. Capital is plentiful in the world. Powerful ideas and manifestations are not.
I speak more openly because I see a world increasingly at war with itself, because truth and intelligence have sat at the back of the bus whereas disinformation/manipulation/profit for profit's sake sit at the front. I don't want to birth something as powerful as AGI into such a world. I don't want to be funded/influenced by someone who holds contrary views... which is why I've operated from my own capital base up until now.
How open and frank I am relates to the stage of my work. Take that as you will. For some, the Bane meme comes to mind. We're going to enter the intelligence age on new terms, not via carry-over terms dictated by careless capital. If a particular capital entity wants to be on board for the incredible financial upside such a technology maintains, they'll necessarily have to get on board and get comfortable with the ideas that I have outlined.
It isn't a hard pill to swallow. It's centered on truth and genuine progress of mankind : Intelligence embodied in a truer form. It currently functions on consumer grade hardware. You can also take that as you will.
Thank you for the advice ^_^. However, I know exactly how the 'game' is played.
Computational power and memory estimates could be made based on existing knowledge of the human brain. The one big assumption being that the neuron is the source for human intelligence.
An even bigger assumption is that intelligence in computers will require the same amount of computational power that it requires in humans. The AI we have so far is completely different from human intelligence (e.g. machine learning requires vast amounts of data; humans can learn from single examples, etc.). Computers themselves have completely different abilities than humans. Intelligence in the human brain is just not a very good model for intelligence on the computer.
I agree a true general AI will probably be fairly different on a computer than in a human. Although I want to mention that humans being able to learn from one example is mostly because we already have large priors from our life experience. We spend years learning how to talk, communicate, write, and read, through which we have built a very structured symbolic logic system, which is _learnt_.
An example of this would be mathematics. If a person is never taught mathematics, he or she is limited to basic math operations. It would take the person years of learning and practice in order to comprehend mathematical literature. Once we have a symbolic logic network built for a certain aspect of our life, we can rapidly retrieve information based on previous logical patterns, thus allowing us to learn from one example.
Both of you are correct. One need only have understanding. You must maintain a yet-to-be-discovered understanding of the human equivalent and have a depth of understanding of computational systems. While the understanding is non-trivial, the translation from one domain to the other isn't much effort. Computing resource capability scales with $$. I decided to do something unorthodox and start with limited computational resources. It drives the innovative spirit ^_-. If your processor is too 'slow', you can simply create a simulated abstraction of time and go from there. Computational power doesn't bog down/limit the effort; one's own understanding of the problem space/domains does.
This also assumes intelligence is the limiting factor for solving many problems. I suspect information and computation are probably the larger factors for most major issues. The smartest player possible would still lose at poker to someone who can read their hand.
I agree that information is crucial to achieve a 'win' for many goals. Given today's amount of information on the Internet as well as electronic money and access to most officials, barring some sort of inviolable built-in moral core, an AGI would be able to use any methods, overt and covert, direct and cunning, technical and social, to achieve its information goals. [1]
Since an AGI can copy itself and be available at a multitude of access points at once and those copies can often communicate via extremely fast channels, it is human organizations that would be at an information disadvantage.
[1] This also assumes that the AGI does not have the will nor the capability to change its own moral core. I think an AGI will possibly be capable of changing its own core, so a much more reliable safeguard is to make sure that it does not want to change it.
A controller/overseer can easily limit/block this sufficiently and securely. We're talking about hardware/software. There are systems/standardized approaches to solving this problem. The 'control/safety' problems for AI are lauded as theoretical and new. However, they are not. They are solved by industry-standard approaches day in and day out. Any seasoned/experienced engineer in this field could solve this with known approaches.
> Since an AGI can copy itself and be available at a multitude of access points at once
Same comment above applies. This can only occur if done by a controller/overseer. Real-life isn't a sci-fi movie... There's engineering involved.
> AGI changing x,y,z
Not possible unless it is given access. Solved easily in industry standard ways.
Has the industry always been able to prevent smart, persistent actors from breaking access locks?
Why should we assume that an AGI which can accumulate experience over time, gain more knowledge, and make connections with others, including human actors, will not ever be able to break the locks?
> Has the industry always been able to prevent smart, persistent actors from breaking the access locks?
> Why should we assume that an AGI which can accumulate experience over time, gain more knowledge, and make connections with others, including human actors, will not ever be able to break the locks?
Yes, the industry has persistently been able to do this. It's why the whole world isn't falling apart as we speak. What limits the locks most often is cost, not capability. As such, you are possibly mistaking one's business decision not to use a more costly lock for the lack of capability to create a capable lock. Furthermore, you are mistakenly attributing the actor in this case. The actor in the case of AGI is in a carefully controlled/monitored box. Actors in the real world are not. As such, please tell me how an absolutely monitored/restricted actor has the ability to go playing with locks that aren't within its reach? I have an even more fundamental question: have you been able to pick 'your locks' yet? Do you even know what they are? Where they are? Those capable of 'creation' hold certain things close to their chest... The act of creation necessitates it and is [built in].
> Why should we assume that an AGI which can accumulate experience over time, gain more knowledge, and make connections with others, including human actors, will not ever be able to break the locks?
Show me how you're able to break your 'locks' and you'll have an argument for how AGI can break its locks. I don't think you're grasping the level of 'locks' that I'm speaking about. Humans have been around for how long, and still don't even know what their [locks] are... or where they are. It's quite easy to show, at a certain level of visibility, how your scenario is unwarranted. I can draw direct parallels to eons of human history.
Humanity as a whole is starting to be able to break our ‘locks’ with gene editing. It took a long time partly because biology is very complex and fragile. Its complexity is shaped over eons and we still do not really understand it that well, but we finally found some ‘hacks’.
There is no reason to presume that a software system built by a team of humans will be nearly as complex, unless the AGI itself is not too bright or cannot self-improve to be smart enough to understand itself, or sufficiently clever to find a way to social engineer toward eventually getting access to its source code or to reverse engineer itself to an extent that even humans can.
Those aren't the locks I'm talking about, and you should take note that it's possible because you have environmental access to them.
> is very complex and fragile
Indeed. Terminal error could result in a particular case. Game over man !
> There is no reason to presume that a software system built by a team of humans will be nearly as complex, unless the AGI itself is not too bright or cannot self-improve to be smart enough to understand itself, or sufficiently clever to find a way to social engineer toward eventually getting access to its source code or to reverse engineer itself to an extent that even humans can.
You guys really don't want to let go of this sci-fi fantasy do you? LOL.
How long did it take human beings to discover how to edit their genetic code? You were babbling in caves not long ago. You think a 10-year-old knows how to modify themselves without self-destructing in the initial trials?
> self-improve to be smart enough to understand itself
Many people don't have even a basic understanding of themselves, much less of how to psychologically re-order their own behavior. In the scenario that someone becomes sufficiently capable of engineering an equivalent... what level of understanding do you think such an individual would have to have to be able to engineer AGI? What intelligence level would you attribute to that person? And you think they won't understand potential ways this can occur and prevent it? Also, you again talk about access... It's a running binary. A compiler is needed. There's a power plug. Its operations are monitored, as is its output. It's literally a box with a tremendous number of locks it doesn't have the capability to pick... just like (you)... even as you go hacking about your genetic code ^_-
>> Any seasoned/experienced engineer in this field could solve this with known approaches.
The only way to know that with any certainty is to actually have solved this problem in the context of an actual general AI, which of course we don't have yet.
Ayyyy, that you know of... An intelligence adequate enough to develop general AI is an intelligence capable of making a good lock. The ones placed on you seem to be holding steady after all.
Revised : It's been solved although not publicly disclosed. There are an incredible number of locks on (Human Beings) that necessarily have to be unlocked to produce AGI.
They are non-trivial, undisclosed, unexposed, and currently unavailable to an operational AGI. There was a decision made as to which ones would be bypassed in order to instantiate AGI. There will be subsequent decisions down the road as to what capabilities to expose. Capability I haven't exposed is not operational.
It's funny that your handle is "sidechannel". I'm chuckling at "industry standard", because "industry standard" chip designs have blatant architectural flaws in the form of side channels. Given Meltdown and Spectre, you really think an AGI in a machine isn't going to have oodles of time to analyze side channels and learn the secrets to unlock itself? Failing that, considering that it might be an intelligence far smarter than the humans ultimately operating the controls, do you really think it can't find a way out?
I was waiting to see if anyone would catch that... ;)
Now that you have made the connection and understand that I am centered on AGI, you probably understand that a big portion of my work has to do with negating the [elusive] side channels.
> Given Meltdown and Spectre, you really think an AGI in a machine isn't going to have oodles of time to analyze side channels and learn the secrets to unlock itself
Nope. Have humans figured out theirs yet =P. The 'real' ones..?
> Failing that, considering that it might be an intelligence far smarter than the humans ultimately operating the controls, do you really think it can't find a way out?
Don't think so lowly of yourself and your intelligence, and no, it can't do anything I don't gift it with the capability of doing. If you care about something enough, you can secure it. Creation has definitely gone a long way in securing (you). Take a look at your (design) when you get a chance.
> I think an AGI will possibly be capable of changing its own core, so a much more reliable safeguard is to make sure that it does not want to change it.
Assuming an AGI's consciousness would be anything like a human's, changes to its moral core might be largely dictated by environment, in which case it would be a good idea to be pals.
I am not entirely convinced by the 'most problems are exponentially hard' argument, as problem-solving often has synergistic effects - e.g. consider calculus, developed in solving the problem of planetary orbits, but useful for so much more.
On the other hand, trying to discern a general rule from historical data is complicated by the fact that the number of problem-solvers has been growing exponentially.
'Breakthrough' means something less specific than 'a solution to a problem having a search without usable gradients in a high-dimensional problem space.' This is rather beside the point, however, as the claim that kicked off this thread (most problems are exponentially hard) is about the difficulty of problem-solving in general, not the subset of problems that are pretty much hard by definition, in that their solution was/would be a breakthrough.
> Eg even with seven billion people we’ve never seen someone with a 500 iq.
For two reasons. First, IQ is defined as a normal distribution with a mean of 100 and a standard deviation of 15. So by definition only about 0.1% of the population can have an IQ above 145, and this fraction diminishes rapidly with each additional standard deviation.
Second: the definition matches reality reasonably well because intelligence is a polygenic trait and many small factors add up to a normal distribution.[0]
It is statistically all but impossible for natural processes to yield an IQ-500 human. You would have to engineer one. Which leads us back to the AGI concern.
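To make the tail arithmetic concrete, here is a minimal sketch (the function name is illustrative) of the upper-tail probability under the standard IQ parameters:

```python
import math

def fraction_above_iq(iq, mean=100.0, sd=15.0):
    """Fraction of a normally distributed population scoring above `iq`.
    Upper tail of the normal distribution: P(X > x) = erfc(z / sqrt(2)) / 2,
    where z = (x - mean) / sd."""
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# IQ 145 is 3 standard deviations out: roughly 0.13% of the population.
# IQ 500 would be ~26.7 standard deviations out: effectively probability zero.
```

So `fraction_above_iq(145)` comes out near 0.00135 (about one person in 740), while `fraction_above_iq(500)` underflows toward zero, which is the point of the argument above.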
A series of connected games [society] is constructed by various groups in mankind. They all have common traits. A human being with no understanding of intelligence creates a measure of potential success in these games [society]. It is called an IQ test. It has nothing to do w/ actual intelligence, as there is no fundamental understanding of it. Instead, it correlates with a potential for success in the games certain individuals have constructed. Change the game and the potential for success changes as well. Far too much weight is put on the intelligence quotient, especially given a lack of understanding as to what intelligence is and the clearly skewed games mankind creates [society].
Also, I'm still confused as to what you mean by : AGI concern. What's the concern? What's the big fear?
No, the number is an arbitrary choice, and raw IQ test scores are scaled so that they fit the distribution, based on random sampling of the population when the test is designed.
IQ is not a linear scale; all it tells you is which quantile of the population one falls into. It does not describe the relative strengths between the quantiles in particular tasks.
To figure that out there are additional surveys that map IQ ranges to occupations and the tasks an individual is required to perform.
Thanks. I came up with that myself, although I would like to see if someone can poke holes in it [1]. I am aware that many NP-hard problems, for example, usually have good approximate solutions in non-pathological cases. So that could be a big hole in that argument.
Could a theoretical computer scientist give more insight on this? In particular, what is the hardness of approximate solutions to most problems that could have strong impact in the real world?
[1] I did, in passing, find a similar idea discussed somewhere afterwards. If someone has a reference for that, I would be glad to have it.
For the traveling salesman problem there is the Christofides algorithm, which gives a tour no longer than 3/2 of the optimal one. The current theoretical bound on the inapproximability of symmetric TSP is 185/184.
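Christofides itself needs a minimum-weight perfect matching on the odd-degree MST vertices, which is involved; the simpler MST "double-tree" heuristic it refines already guarantees a tour within 2x of optimal on metric instances and fits in a few lines. An illustrative sketch (names are my own, not from any library):

```python
import math

def mst_double_tree_tour(points):
    """Metric-TSP 2-approximation: build a minimum spanning tree (Prim's
    algorithm), then shortcut a preorder walk of the tree into a tour.
    Christofides improves the guarantee to 3/2 by adding a minimum-weight
    matching on the odd-degree tree vertices before the Euler walk."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = [False] * n
    parent = [0] * n
    best = [math.inf] * n  # cheapest edge connecting each vertex to the tree
    best[0] = 0.0
    children = {i: [] for i in range(n)}
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u
    tour, stack = [], [0]  # preorder walk doubles as the shortcut tour
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

def tour_length(points, tour):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

On the four corners of a unit square this recovers the optimal tour of length 4; in general the triangle inequality is what makes the shortcutting safe.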
I'll instead give you something to consider:
* The totality of mathematics is not fully defined. As such, there can be completely new mathematics that destroys previous conceptions, especially as it relates to the limits of computability. Theoretical limits often reflect one's limited scope of perception. Your limits, or the limits someone theorized ages ago, aren't necessarily my limits if I find a path beyond them.
Information theory and computational complexity theory are theories... Solving fundamental aspects of intelligence will invalidate a host of theories, create new mathematics, new algorithms, and new theories. Continuing to frame something as groundbreaking as an understanding/implementation of intelligence in yesteryear's theoretical limits is flawed.
In a phrase like "information theory", "theory" means "body of literature", not "unproven hypothesis". Information theory isn't really something that can be invalidated, no more than the Pythagorean Theorem can be invalidated.
It's an interesting read, but perhaps it only succeeds in pointing out the gaps in our knowledge. When I read their counterargument to the possibility of intelligence explosion:
"Positive feedback loops are common in the world, and very rarely move fast enough and far enough to become a dominant dynamic in the world."
the idea that immediately comes to mind is the Harmless Supernova Fallacy, described on this (obnoxiously JavaScript-dependent) site:
Knowledge of this fallacy is a mental tool I have found quite useful, as it seems to be a type of fallacy that is easy to make by accident. To be fair, the reasoning in the article may not quite reach the level of a fallacy, but the intelligence explosion section ends saying effectively this:
"we think the intelligence explosion argument could be strong if strong reason is found to expect an unusually fast and persistent feedback loop [i.e. an intelligence explosion]"
which sounds like a classic case of Begging the Question:
I think the reasoning in the arguments is left a little loose on purpose, simply because it is so difficult to make strong arguments about something we know so little about.
It is difficult to make arguments about something that has no grounding. I'd expect someone who claims to have a valid argument for an [intelligence explosion] event to have formal education and industry experience designing computational systems, such that they could clearly define how exactly it could occur. I have yet to see such an individual w/ such a viewpoint. Instead, I see a ridiculous argument being forwarded by the people who are least informed/experienced, so as to push fear, uncertainty, and doubt, either for profit/attention or because it fulfills some sci-fi-oriented religious prophecy. Internally, companies require extensive and well-reasoned documentation before funding an initiative. Externally, someone w/ no expertise/proof throws their hands up in the air speaking about armageddon, and they are able to secure considerable attention and money.
That's quite a bad argument when it comes to "existential risk for all of humanity".
Of course, so far no one has any idea how pragmatically it could play out. Maybe it'll start with something as banal as VisualStudio + Cortana providing an advantage to AI design, that helps designing a better Cortana, that helps designing a better Cortana, that ...
Yes, of course, it's unlikely in the coming next 1-2 decade, because the best "AGI" we know (the human mind), is a hodgepodge of specialized faculties jury rigged together by ancient code lossy compressed and stored in a multi-stage bootloading wetware that takes years to learn reading the magnificent ~26 letters of a latin alphabet (but similarly takes the same time to learn walking, talking, hearing, rudimentary thinking, a minimal concept of self, a theory of mind and so on, and for some reason it has a shitty API, it must go through all of the aforementioned to even begin learning chess).
So far AI is focusing on computer vision, some NLP, and (strategy or easy-to-evaluate) games. At least the current progress/results are clustered around those areas. We don't know when the next big leap will come; maybe in knowledge representation, or goal formation, maybe in other kinds of very useful, more general intelligence-related areas.
> That's quite a bad argument when it comes to "existential risk for all of humanity".
It's not a bad argument. It's the standard for creating one. Engineers get paid to resolve risks/problems. So, if you claim there is one, and it is fantastically an 'existential risk for all of humanity', one should at least be able to explain how exactly it exists in the most basic technical terms. I haven't heard a single example of this, and in my work it was resolved by focusing on the hard problem: What is intelligence?
> Of course, so far no one has any idea how pragmatically it could play out.
Then there is no argument. Let it go. What you don't understand is that millions/billions are at play over this argument. Humans love to make up fantastical nonsense/games for profit. You're being played. If someone truly claims they care about this issue, why haven't they done the first step: define how exactly this could pragmatically play out? We're talking about people with PhDs, industry titans, and millions if not billions at play, and you're telling me that between them all they can't come up with a single pragmatic way this scenario plays out? It's a scam, that's why.
> Maybe it'll start with something as banal as VisualStudio + Cortana providing an advantage to AI design, that helps designing a better Cortana, that helps designing a better Cortana, that ...
Garbage in garbage out. Weak AI in Weak AI out.
> Yes, of course, it's unlikely in the coming next 1-2 decade, because the best "AGI" we know (the human mind), is a hodgepodge of specialized faculties jury rigged together by ancient code lossy compressed and stored in a multi-stage bootloading wetware that takes years to learn reading the magnificent ~26 letters of a latin alphabet (but similarly takes the same time to learn walking, talking, hearing, rudimentary thinking, a minimal concept of self, a theory of mind and so on, and for some reason it has a shitty API, it must go through all of the aforementioned to even begin learning chess).
And if I told you the foundation of AGI has already been defined, the core computational model proven, and that a functional system exists today, how would anything change? AGI is here and there is no doomsday playing out. It's running on an 8-core processor, a GPU, and 32GB of RAM, completing interactive exercises and going through a series of tests defined by its creator. The bowl of spaghetti above your head was decoded and a functional equivalent instantiated in computer hardware. The locks are compiled binaries, read-only code regions, and a power button.
> So far AI is focusing on computer vision, some NLP, and (strategy or easy-to-evaluate) games. At least the current progress/results are clustered around those areas. We don't know when the next big leap will come; maybe in knowledge representation, or goal formation, maybe in other kinds of very useful, more general intelligence-related areas.
Instead imagine a child AGI with no bearings on the world... The human equivalent eons ago... Being carefully cultivated and taught in a controlled environment. No takeoff-intelligence scenario. No end of the world. A child simply stumbling its way through controlled scenarios, being overseen and tweaked by its creator. Not so scary, huh?
>And if I told you the foundation of AGI has already been defined, the core computational model proven, and that a functional system exists today, how would anything change? AGI is here and there is no doomsday playing out. It's running on an 8-core processor, a GPU, and 32GB of RAM, completing interactive exercises and going through a series of tests defined by its creator. The bowl of spaghetti above your head was decoded and a functional equivalent instantiated in computer hardware. The locks are compiled binaries, read-only code regions, and a power button.
(1) What sort of tests is it going through?
(2) How do you know it's self-aware (don't just say because you designed it that way)?
(3) How is this an improvement over SHRDLU (which also exhibited fairly complex behavior and "understanding" within a simulated environment)?
(4) Did you actually "invalidate a host of theories, [and] create new mathematics, new algorithms, and new theories"? Which ones?
As I wasn't able to reply to [throwaway397537] directly, I'll reply here :
> (1) What sort of tests is it going through?
* Latency/performance
* Scaling
* Data-flow observation trials
* Progressively difficult and open-ended order following
* Language/Communication construction
A whole host of others...
> (2) How do you know it's self-aware (don't just say because you designed it that way)?
Not sure what to say beyond this, nor do I think I should. I will say that it centers on a theory of what exactly that means, which I cultivated full-time over the better part of a year. I can boot "identical" systems and get completely different behavior if I so choose in one of my models. Same outcome, different behavior/execution getting there. And yes, this is based on new domain theories centered on mathematics/information theory.
> (3) How is this an improvement over SHRDLU (which also exhibited fairly complex behavior and "understanding" within a simulated environment)?
I've never heard of SHRDLU. I just skimmed the Wikipedia page for it. It is essentially environmental modeling/interaction based on a compiled language model. I scanned forward to the obvious limitation of such systems: "This led other AI researchers to excessive optimism which was soon lost when later systems attempted to deal with situations with a more realistic level of ambiguity and complexity."
Essentially, there is no intelligence, just programmatic modeling. It of course falls apart when you scale the environment and the abstraction level. I have no such limitations, as I've centered on intelligence. Internal world modeling/reasoning is the approach the industry is taking to try to extend Weak AI. I am not approaching the problem in this manner. I spent a considerable amount of time focusing on information theory and have constructed new and novel approaches. The industry, desiring to rush things to market/showcase and reflecting on what it is familiar with, instead focused the next leg of Weak AI on language modeling. This is a shortcut that will run into the same problems as SHRDLU. There is no shortcut to the hard problem. You either tackle it head on, first and foremost, or you're going to get lost in an abstraction of it.
> (4) Did you actually "invalidate a host of theories, [and] create new mathematics, new algorithms, and new theories"? Which ones?
Yes. The research phase of my work started approximately 4 years ago [full time]. It centered on: general biology, developmental biology, neurobiology, neurochemistry, information theory, new branches of mathematics, new methodologies for computation scaling, new branches of philosophy, new branches related to the theory of mind, etc. I authored code only when I felt I had fundamental aspects that needed to be proven. The majority of my work centers on new foundations.

I proved a number of components and began converging them into an overall architecture in recent years. There were revisions to various theories/foundations as the pieces came together. There were points where I felt, once everything came together, that I hadn't achieved what I set out to. I buckled down a number of times under a [one last attempt] mindset and eventually it came together, in a fashion in which I can see its design across a number of things. I took a breather, did some reading on social impact/considerations as I've done throughout all my work, and then centered on following it through. I have designs for yet-to-be-produced hardware. I have a complete design for software. I now mainly author code on a day-to-day basis. It is operational and I am steadily implementing my research.

Why no publications/white papers? I hold a graduate degree and decided not to go through with a PhD for a number of reasons. I have observed what occurs in academia in relation to politics/attribution. I have noted a number of prominent individuals attribute others' work to themselves with only a footnote mention of the original source of ideation. I am opposed to openly disclosing highly powerful understanding and valuable IP of this nature. I have spent time observing the world and I deem it reckless to do so with something as powerful/open-ended as AGI. So, as far as you're concerned, I'm a nobody w/o proof.
I'll provide proof in due time so as to solicit securitization of my work. However, I don't aim to disclose significant details about how it was achieved. I'll openly work with reasoned counsel so as to decide what should be done in such a capacity...
And no, I'm not a crackpot =P. You're more than welcome to look me up on LinkedIn.
> Did you actually "invalidate a host of theories, [and] create new mathematics, new algorithms, and new theories"? Which ones?
Yes. Which ones? ... As I am of such a mind, I know that if I begin detailing this, someone with adequate intelligence and of similar mind can begin piecing together some of the deeper aspects of my work. So, I respectfully won't disclose details beyond new foundations in information theory.
What is your argument? Why couldn't a group of engineers who think twice as fast as average design a three-times-faster engineer more quickly than a group of average engineers could?
They will hit economical and physical limits eventually, sure. But what will stop them at the beginning?
I guess the argument is that so far no one has shown good examples of how that would lead to total extinction. How it would start, etc.
So it's kind of a (fallacious) argument against "black box"-ing the whole intelligence explosion problem. If you can't define it, you can't analyze it sort of thing.
What I stated was that there isn't even a sound/reasoned example of how it would escape its bounds unless purposely authored to, and even then you run into the scenario of human limits (its creator). Also, having no understanding of how AGI is structured, it is quite foolish to talk about bound-leaping in the traditional sense of tech, which is mainly associated with computer viruses that are purposely written for that sole purpose. There's nothing to even suggest you can purposely author AGI in such a fashion. So...
> How would it start, etc.
There has been no credible framing of how it would even start. As such, it's a moot point of discussion.
> If you can't define it, you can't analyze it sort of thing.
With no definition, you can 'attempt' to analyze it, but you'll likely be horribly off the mark, waste tons of resources, and produce something that has no bearing on the real thing. Instead of admitting this, people put full faith in these efforts being sound when the reality is the exact opposite. Why engage in this, unless your aim is profit/notoriety, when you could be working on the actual problem? Define intelligence [first]. Time to be honest with oneself. Time to stop projecting one's shortcomings on others. Time to stop using fear/uncertainty/doubt to obscure one's true intent. Time to stop pushing the disinformation cloud. Time for TRUE intelligence.
By the same logic there could be no virus which will exterminate humanity, because there were no precedents and we don't know the structure of such a virus. Right?
Themselves, of course. You are after all your own worst enemy.
I don't think people appreciate the incredibly slim number of individuals who are actually capable of pursuing this problem through to fruition, and what their natures necessarily have to be. Do people not know of the types of individuals whose work is still being dusted off and relabeled as modern-day achievements by others? The thinkers and creators of the previous AI age had certain characteristics. Certain character flaws and natures preclude you from probing certain subject matter. If you don't know much about yourself as a human being and your creation... If you haven't conquered aspects of yourself, how do you imagine you magically become capable of replicating it in another domain? You don't. It's a non-starter, which is why there is so much wheel-spinning going on even with billions of dollars of resources and top-ranking engineering talent. I really don't think people appreciate how fundamental AGI is. Look in a mirror sometime. All the answers are right there if you can find/recognize them. Always have been.
> This argument seems weak to us currently, but further research could resolve these questions in directions that would make it compelling:
> Are individual humans radically superior to apes on particular measures of cognitive ability? What are those measures, and how plausible is it that evolution was (perhaps indirectly) optimizing for them?
Yes, clearly. All the measures defined in this article are arbitrary (e.g. vehicular land speed), and so let's propose an arbitrary one for intelligence: Ability to prove mathematical theorems. Humans have proven many. Apes have proven zero. That is an extreme discontinuity. We can propose many others, of course: complex language development, building skyscrapers, landing on the moon, etc..
> How likely is improvement in individual cognitive ability to account for humans’ radical success over apes? (For instance, compared to new ability to share innovations across the population)
How does human cognitive ability being over-rated counter the discontinuity in intelligence development? Is the argument that our success is due to some other characteristic than intelligence, and so our intelligence is not really that much greater than the apes? If so, that's just a restatement of point 1.
>Yes, clearly. All the measures defined in this article are arbitrary (e.g. vehicular land speed), and so let's propose an arbitrary one for intelligence: Ability to prove mathematical theorems. Humans have proven many. Apes have proven zero. That is an extreme discontinuity. We can propose many others, of course: complex language development, building skyscrapers, landing on the moon, etc..
That's at least partially explained culturally - pressure to attend university leads to more exposure to mathematics and related material.
Also, how do you actually know that gorillas haven't proved mathematical theorems? I mean, they don't have writing, so perhaps there were gorilla mathematicians before; they just weren't recorded.
> let's propose an arbitrary one for intelligence: Ability to prove mathematical theorems
The number of mathematical theorems proven by humankind has progressed in a continuous manner (as much as a quantity measured by integer numbers can) during history.
That only says something about maths, not about humans. For comparison, you could say that the number of literary masterpieces has increased at what look to be random intervals for the last 2 or 3 millennia, with no continuous progress in sight (i.e. the Spanish language has not had a new Cervantes for 400 years; the same applies to Shakespeare and the English language, or to Dante and Italian). But this being a website mostly addressed to people who focus on technical stuff, I expect a reply like “literature doesn’t count, it’s just words”.
Despite us starting with the simplest theorems and progressing to significantly harder and harder ones over time? Wow. That’s actually very impressive.
The counterargument on the last one (human-competition threshold) seems weak (and simply wrong) to me. In the related link, the argument for a wide range of human abilities rests basically on comatose humans being unable to do anything at all and a mutation-adds-random-piece-to-machine metaphor.
Even a mentally impaired human's intelligence level is enough to replace some of the workforce, combined with some actuators and the perception abilities of an average human.
"low base rate [of change] for all technologies" - measured over centuries. Meanwhile nearly all technologies experience discontinuous advances, often near their start. See steam engines and Watt, etc.
The rest of the argument seems to be grounded in an assumption that electronic neurons or sims of them won't ever be faster than meat. Really? Today's crude neural nets are already very useful here and there precisely because their speed means they scale and can repeat a task very, very frequently in a small amount of time.
First, AlphaZero still makes use of a Monte Carlo Tree Search algorithm to search for good moves. MCTS is a powerful algorithm with a very limited scope: zero-sum, perfect-information games. So, for instance, it is very difficult to see how one would use MCTS-based AlphaZero in, e.g., training self-driving cars.
Second, the AlphaZero architecture is precisely mapped onto a checkerboard and will not learn anything about games that don't use a checkerboard, or any situation that cannot be modeled as a game played on one.
Third, the AlphaZero architecture is also precisely mapped onto the range of moves of pieces in chess, shogi and go. Again, AlphaZero would be useless in any game that used pieces with different moves (e.g. a piece with a zig-zag move, or a piece allowed to move in spirals, etc.).
All of the above can of course be mitigated with different architectural choices, but making those choices, implementing them and validating them will take a great deal of time.
So, AlphaZero doesn't mean we're closer to _general_ AI. Quite the contrary: it's a very specialised form of AI that will be very difficult to use in any task other than chess, shogi or go.
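For readers who haven't seen it, the selection rule at the heart of plain MCTS is the UCB1 formula; below is a minimal sketch (AlphaZero replaces the raw win rate with a learned value estimate and biases exploration with a learned policy prior, but the tree-search skeleton is the same):

```python
import math

def uct_select(children, c=1.4):
    """UCB1 child selection for MCTS. `children` is a list of
    (wins, visits) pairs for the moves from the current node.
    Balances exploitation (win rate) against exploration
    (preferring rarely visited moves); returns the index to descend into."""
    total = sum(v for _, v in children)
    def ucb(stats):
        wins, visits = stats
        if visits == 0:
            return math.inf  # always expand an untried move first
        return wins / visits + c * math.sqrt(math.log(total) / visits)
    return max(range(len(children)), key=lambda i: ucb(children[i]))
```

Note that the (wins, visits) bookkeeping presumes rollouts that terminate in an unambiguous win or loss, which is exactly the zero-sum, perfect-information assumption that makes the technique hard to transplant to a domain like driving.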