Hacker News: rirarobo's comments

FYI, I don't have enough karma to flag, but all comments from the user you're replying to seem to be direct ChatGPT outputs, with a consistent summary-detail-concluding-question structure, superficial attempts to sound profound, and plenty of em dashes.


Oh, yeah. They got me. Normally I'm pretty good at finding those.


MGX also isn't a US entity; it's a UAE sovereign wealth venture:

https://www.mgx.ae/en


I feel like it's also important to remember that the truth may be something else completely.

I'm reminded of Bob Lee's killing, when many initial reactions assumed it was a random act of violence and blamed city leaders for progressive policies and decarceration, but it later turned out to be personally motivated [1][2].

Similarly, in this case, the truth may be neither suicide nor company-initiated murder, but something else entirely.

1. https://x.com/all_in_tok/status/1644752577475805185

2. https://apnews.com/article/bob-lee-cash-app-nima-momeni-tria...


In my experience, ChatGPT (GPT-4) has been able to do this very accurately for over a year now. Gemini also seems to perform well.


> But referring to all these "4D dynamical worlds" sounds overhyped / scammy - everyone else calls 3D space simulated through time a 3D world.

In the research community, "4D" is a commonly used term to differentiate from work on static 3D objects and environments, especially in recent years since the advent of NeRF.

The term "dynamic" has long been used similarly, but sometimes connotes a narrower scope. For example, reconstruction of cloth dynamics from an RGBD sensor, human body motion from a multi-view camera rig, or a scene from video, but assuming that the scene can be decomposed into rigid objects with their individual dynamics and an otherwise static environment. An even narrower related term in this space would be "articulated", such as reconstruction of humans, animals, or objects with moving parts. However, the representations used in prior works typically did not generalize outside their target domains.

So, "4D" has become more common recently to reflect the development of more general representations that can be used to model dynamic objects and environments.

If you'd like to find related work, I'd recommend searching in conjunction with a conference name to start, e.g. "4D CVPR" or "4D NeurIPS", and then digging into webpages of specific researchers or lab groups. Here are a couple interesting related works I found:

https://shape-of-motion.github.io/

https://generative-dynamics.github.io/

https://stereo4d.github.io/

https://make-a-video3d.github.io/

All that considered, "4D dynamical worlds" does feel like buzzword salad, even if the intended audience is the research community, for two main reasons. First, it reads as if authors with a physics-simulation background wanted to evoke "dynamical systems", but prior work in 4D reconstruction/generation uses "dynamic", not "dynamical". Second, as described above, the whole point of "4D" is that it's more general than "dynamic", so using both is redundant. "4D worlds" alone would be more appropriate, IMO.


Are there other, more up-to-date, resources you would recommend?


The Deep Learning Interviews book linked in the other thread (more specifically volume 2, based on its proposed contents) is much more representative of ML interviews for candidates with at least an undergraduate level of machine learning training.

https://news.ycombinator.com/item?id=41084834

Note that machine learning engineering is very different from model and data work, i.e. designing the experiments. There are plenty of jobs where you package NVIDIA drivers and PyTorch into Docker containers, or write low-level C++ to, e.g., implement a transformer network on a new device architecture. Those require nothing more than cursory knowledge of machine learning; you can essentially get away with treating models as magical black-box matrix-multiplication formulas. Very few companies can actually afford the seven-figure salaries for genuine frontier-level machine learning research.

For example, if you want to run a GPT model on some obscure graphics chip, you are better off hiring a C++ computer graphics/embedded engineer than a typical academically trained ML researcher. The engineer can implement a GPT model simply by building out the matrix multiplications, and can do a better job without even knowing what an activation function is.
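To illustrate the "it's all matrix multiplications" point: here is a minimal sketch (in NumPy, purely for illustration; all function and variable names are my own, not from any library) of a single causal self-attention head, the core of a GPT layer, written as nothing but matmuls plus a softmax.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(x, Wq, Wk, Wv):
    # x: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_head)
    q, k, v = x @ Wq, x @ Wk, x @ Wv            # three matmuls
    scores = q @ k.T / np.sqrt(k.shape[-1])     # one more matmul
    # causal mask: each position attends only to itself and earlier ones
    mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    return softmax(scores) @ v                  # final matmul

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                 # 4 tokens, d_model=8
Wq, Wk, Wv = (rng.standard_normal((8, 2)) for _ in range(3))
out = attention_head(x, Wq, Wk, Wv)
print(out.shape)  # (4, 2)
```

A full GPT block adds layer norm, an MLP (two more matmuls), and residual connections, but nothing conceptually beyond this; porting it to a new chip is mostly a matter of writing fast matmul kernels.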


Just to clarify, one founder on the board, Ilya, has skin in the game, and was the reason behind Sam's firing.

He convinced other members of the board that Sam was not the right person for their mission. The original statement implies that Ilya expected Greg to stay at OpenAI, but Ilya seems to have miscalculated Greg's backing of Sam.

This appears to be a power struggle between the original nonprofit vision of Ilya, and Sam's strategy to accelerate productionization and attract more powerful actors and investors.

https://nitter.net/GaryMarcus/status/1725707548106580255


I think you're right that this move by Ilya is meant to refocus on the non-profit mission, meaning less urgency to commercialize and sell products and more focus on core research.

Based on Ilya's statements, it doesn't seem like safety was the main motivation for the decision. Maybe there's hope that they'll be more open, but nothing indicates either way, so it's possible they'll stay tight-lipped regardless.


Based on Ilya's comments [1] it sounds like it was more about a disagreement around the original non-profit mission, and making sure that AI benefited all of humanity.

Perhaps Ilya felt Sam was focusing too much on profit and power with his recent world tour and then Dev Day? Regardless, it's certainly rare for one of the core scientists to maintain control over their creation rather than the other way around. Typically the VC business guy would be the one pushing the scientist out.

1. https://nitter.net/GaryMarcus/status/1725707548106580255


It would be weird for this sort of disagreement to reach a fever pitch that sees the CEO sacked without some much better explanation in the background, though.

Like, Dev Day was characterized as going "too far". How? How is that interfering with the mission to benefit all humanity? It's all very weird.


How does profit-sharing in the ChatGPT store benefit humanity?


By funding further development of AGI in the long term and creating new ways to make people more efficient in the short term?


Attracting developers arguably benefits humanity, and one way to do that is by sharing profits.


Yeah, no idea; I can only speculate. Maybe Dev Day was just the final straw, or perhaps it was related to APEC and the whole world tour that Sam did to make deals with powerful actors, or maybe it was related to the rumored tender offer?

https://www.nytimes.com/2023/10/20/technology/openai-artific...


I don't think Ilya is jealous; I think he's just fundamentally more devoted to the original non-profit mission and the AI research.

Sam is a VC guy who has been going on a world tour to not just get in the spotlight, but to actually accumulate power, influence, and more capital investment.

At some point, this means Ilya no longer trusts that Sam is actually devoted to the original mission to benefit all of humanity. So, I think it's a little more complicated than just being "jealous".

