
Did OpenAI really unofficially abandon Universe?


Yeah, really interested in hearing their take on this. It's not often you see a Musk-sponsored enterprise cast a major project aside without public comment.


The main reason people in the AI community believe Universe has been abandoned is that the engineers who worked on it were laid off, and none of the promised updates ever materialized. This doesn't preclude a fresh non-VNC take on Universe with a smaller team, of course, perhaps with more focus on benchmarking (like Atari or Labyrinth) than on universality.


(Author here). I hadn't realized that the engineers were laid off. Where did you find this out?


Gossip at the RLDM conference.


It's because the people actually working on AI, including those at OpenAI, finally knocked some sense into Elon Musk. He finally realized how far behind AI is (it is glorified linear regression) and we won't be seeing general AI for at least another 40 years.

Source: Am an AI research scientist.


I got my PhD in machine learning and NLP and did a 3-yr postdoc on deep learning.

My advisor shared the following wisdom with me: "When the experts in your field say that something can be done, they are probably right. When the experts in your field say that something cannot be done, they are not necessarily right."


> When the experts in your field say that something can be done, they are probably right.

Generally yes, but they may be significantly off on the timeframe. One famous example is that once alpha-beta search was invented (in the late 1950s), Herb Simon predicted that "within ten years a digital computer will be the world's chess champion". That did eventually happen, using techniques not even all that different from alpha-beta search, but it took 40 years rather than 10. Many of the 1980s neural nets claims turned out to be eventually vindicated too, but it took 30 years, which was quite a bit longer than the optimistic portion of 1980s "connectionists" expected.
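For anyone who hasn't seen it, the core of alpha-beta search is tiny. Here's a minimal illustrative sketch over a hand-built toy game tree (the tree and its leaf values are made up for the example, not from any real game):

```python
# Minimal alpha-beta pruning over a toy game tree.
# Internal nodes are lists of children; numbers are leaf evaluations.

def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping (pruning) branches
    that provably cannot change the result."""
    if isinstance(node, (int, float)):  # leaf node: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: the minimizer won't allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff: the maximizer has a better option
                break
        return value

tree = [[3, 5], [6, [9, 2]], [1, 4]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # → 6
```

The engineering that turned this into Deep Blue (evaluation functions, move ordering, specialized hardware) took decades, but the search skeleton barely changed.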

That's the type of skepticism I usually have with claims today too. When people say "there will be fully autonomous self-driving cars on the road by 2020", I don't doubt it'll happen, but whether it'll happen in less than 3 years I have more doubts about. You could argue AI researchers have gotten better at accurately predicting the timeframes of advances than they were in the early days of AI, but I'm not sure there is solid evidence of that (would be interesting if someone has studied it).


It can happen the other way around too, though. Few people predicted the massive jump in AI ability over the last few years. Notable AI researchers said it would take decades to get to human accuracy on ImageNet, and they were proven wrong within a few years. I recall reading the first deep learning Go papers around 2015 and thinking that superhuman Go AI was inevitable in a few years. And when I discussed it with other people they were very skeptical and thought it was unlikely. And then AlphaGo came out...


> Few people predicted the massive jump in AI ability the last few years.

I'm guessing you work in image recognition, or mostly hear from people who work in image recognition.

There is more to AI, and not all of it is instantly improved by a convolutional neural net.



If I was having coffee with a PhD in ML and NLP who has years of experience in deep learning, I would ask this:

What are the most valuable, unsolved problems in the field?


So when you say "It's because...", are you in touch with people working there, or are you just guessing that this transpired because it seems like a reasonable assumption to you?


Would be interested to know how you reached that 40 years number. I don't think we are even remotely close to AGI, 40 years to me seems extremely optimistic. That's within my lifetime.


Probably the same way everyone does, by pulling it out of thin air as a guess. When nobody even knows what theoretical breakthroughs are necessary, you'll always end up with a scattershot all over the place, even amongst experts. Try asking working mathematicians how long until the Riemann hypothesis is resolved one way or another, or look at what people were saying about Fermat's Last Theorem up until it was solved.

What we do know is that current techniques won't get us close to AGI, so something new is needed (or perhaps like backprop, something old will work once we have enough compute power). Personally I'm bullish on AGI because I have strikingly low faith in the ability of evolution to operate very effectively as a tool for algorithm discovery, so I suspect that once we've hit the compute threshold we'll find that many different algorithms can do the trick, and 40 years is probably not out of the question for us to hit that point (or 10, or 100), depending who you talk to about what the compute threshold might be.

I'd caution against putting too much weight in what experts say, though, since with a tiny set of exceptions anyone working on "AI" today is actually just working on narrow AI, which is, as someone put it, just glorified linear regression. Those tools will almost certainly be part of the solution, but only in the sense that the classical theory of Diophantine equations was part of Wiles's proof of Fermat's Last Theorem - they are not the core of the theoretical approach.


Evolution has been running ~10^19 experiments in parallel for billions of years: http://reducing-suffering.org/how-many-wild-animals-are-ther...

Evolution is a slow algorithm, but it had access to an absurd amount of compute (all neuronal organic matter on Earth) and environment simulation (all of physical reality on Earth) when discovering us; so the discovery of the algorithms/architectures/principles in our heads shouldn't be viewed as trivial.


The massive compute/time advantage evolution has makes me bearish about AGI. We really need to fix our compute capabilities before we can start overrunning evolution. The math dictates it'll happen, but exponentially slowly if we don't innovate in compute.


There's more to the story, too: advances on top of CRISPR may give us better tools to self-improve the species, accelerating evolution.

Personally, I'm bearish about AGI because I believe we will eventually realize that the brain is a glorified linear regression too, with a custom wiring to help learn language and vision.
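To make the "glorified linear regression" framing concrete: a single network layer with no activation function, trained with squared loss, converges to exactly the ordinary-least-squares solution. A small illustrative sketch (synthetic data, made-up weights):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])        # illustrative "ground truth"
y = X @ true_w + 0.01 * rng.normal(size=200)

# Closed-form linear regression (ordinary least squares)...
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# ...and the same model framed as a one-layer "network" trained by
# gradient descent on squared loss.
w_gd = np.zeros(3)
for _ in range(2000):
    grad = 2 * X.T @ (X @ w_gd - y) / len(y)  # gradient of mean squared error
    w_gd -= 0.1 * grad

print(np.allclose(w_ols, w_gd, atol=1e-3))  # → True
```

The nonlinearities and depth of real networks are what separate them from this baseline; the dispute in this thread is over how much that separation buys.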


What do you mean when you say that the brain is a glorified linear regression?


>What we do know is that current techniques won't get us close to AGI, so something new is needed (or perhaps like backprop, something old will work once we have enough compute power).

With backprop we didn't just need bigger machines, we needed better algorithms, palliatives for the exploding-gradient problem that made values exceed our numerical representations, and then hardware specifically designed for doing the matrix-ops involved.
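One of the standard palliatives mentioned here is gradient clipping: when the gradient's global norm blows up, rescale it before the update. A minimal sketch (the threshold and toy gradients are illustrative):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their combined L2 norm
    does not exceed `max_norm` - a common fix for exploding gradients."""
    total = float(np.sqrt(sum(np.sum(g ** 2) for g in grads)))
    if total > max_norm:
        scale = max_norm / total
        return [g * scale for g in grads], total
    return list(grads), total

grads = [np.array([3.0, 4.0]), np.array([0.0, 12.0])]  # global norm = 13
clipped, norm = clip_by_global_norm(grads, max_norm=1.0)
print(norm)  # → 13.0
print(float(np.sqrt(sum(np.sum(g ** 2) for g in clipped))))  # ≈ 1.0
```

The point of clipping the *global* norm rather than each array separately is that the relative direction of the update is preserved; only its magnitude is capped.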

If I saw something capable of speeding up probabilistic program inference the way GPUs sped up backprop, I'd start saying we should expect to see powerful AI applications quite soon.


Better algorithms were invented because of bigger machines. Once computers got fast enough, researchers could experiment with different algorithms on realistically sized models and datasets without waiting two years for each experiment to finish training.

Probabilistic programming isn't going to help general AI much. Things like dropout seem to work well enough, and for the most part AI is severely underfitting rather than overfitting. Our models are far too simple and small to really learn language and do complicated reasoning. Making them Bayesian doesn't fix that.
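For reference, dropout itself is a few lines. A sketch of the standard "inverted dropout" formulation (the drop rate and input here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p, training=True):
    """Inverted dropout: at training time, zero each unit with probability
    `p` and rescale survivors by 1/(1-p) so the expected activation is
    unchanged; at inference time, pass the input through untouched."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1-p
    return x * mask / (1.0 - p)

x = np.ones(10_000)
y = dropout(x, p=0.5)
print(y.mean())  # close to 1.0, since the rescaling preserves the expectation
```

Whether such regularizers address the underfitting-vs-overfitting question raised above is exactly what's in dispute in this subthread.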


>Probabilistic programming isn't going to help general AI much.

Excuse me while I laugh.[1,2,3,4]

>Things like dropout seem to work well enough, and for the most part AI is severely underfitting rather than overfitting.

For the most part, neural networks can't reason at all. They just induce deterministic functions over high-dimensional Euclidean spaces.

>Our models are far too simple and small to really learn language and do complicated reasoning.

They're also not compositional (new concepts as functions of old concepts), productive (able to draw an unbounded number of inferences from each representation), or unbounded in size of representation (unboundedly many concepts). Neural networks don't even represent causal structure, let alone model how an intervention will affect outcomes!

It is, however, really nice to hear an AI booster admit just how incredibly limited connectionist models actually are.

>Making them bayesian doesn't fix that.

No, changing to a causal, compositional representation that allows for productive and nonparametric (unboundedly large) learning does that. The Bayesian part just makes it extra nice by letting us "put information in" anywhere in the model (at any variable) by conditioning.

[1] http://forestdb.org/models/learning-physics.html
[2] http://forestdb.org/models/word-learning.html
[3] http://forestdb.org/models/arithmetic.html
[4] http://forestdb.org/models/politeness.html


They're a scientist: https://xkcd.com/678/


That's a little sadder now that I've had the "fourth quarter next year" thing happen to me personally.


40 years is very pessimistic. The median estimate given by AI experts is in the 2040s, and Moore's law would put hardware past the human brain's raw compute before then.


I'd be interested in hearing more background here. Last time I heard Musk say anything about AI, he was still on the hype train to crazy-town, talking about the world-conquering things it would do in the coming decades that have nothing to do with what anyone's researching right now.

The idea that OpenAI could talk him down is pretty impressive, and if true I would significantly positively update my impression of OpenAI. (I thought OpenAI was funded by people on this hype train.)


Musk didn't say that AGI was close or that current research was particularly dangerous. He was worried about what might be possible in many decades.


Oh good, they're finally getting it. Now we can maybe have a nice AI winter for deep learning and clear the stage for the next few things.


Universe (a specific training framework) is purposely being abandoned, not OpenAI... But thank you for your valuable insight; we all know being an AI research scientist gives you a direct connection to Elon's brain.

Edit: And seems like you are wrong anyway, see top comment.


yes


They switched over to OpenAI Gym, which is much broader in scope (able to play Steam-based video games).


This question on Quora contradicts you, now I don't know who to believe...

https://www.quora.com/What-is-the-difference-between-OpenAIs...



