Hacker News

This got lost between my brain and my previous message, but yes, I agree, that's the most likely explanation.

But I mention the idea that they're trying to build the equivalent of the early table-based Microsoft Surface (before that name meant a tablet), but pseudo-holographic instead, only because it seems so unbelievable to me that stand-alone goggles could possibly deliver what they're claiming.



Stand-alone goggles make more sense than a table, because they're closer to the eye and can more directly manipulate what the user sees.

If you read through Magic Leap's site alongside their patent applications, it becomes clear that they are trying to develop a set of goggles combining some form of projection onto the retina [0] with some form of selective blocking [1] (to provide contrast, and to keep the projected images from appearing as hazy mirages over the light otherwise reaching the retina).

The blocking, especially, would be impossible to achieve with anything but goggles.

[0]: https://www.google.com/patents/US20140071539
[1]: https://www.google.com/patents/US20130128230


I think the idea is not that they wouldn't have goggles (these lab prototypes have goggles) but that the goggles might be much more constrained than a free, walk-around head mount. Think Imagineering-style controlled experiences, down to arcades, rather than a personal walk-around device like Glass^n or an AR Rift. Controlling both the environment and background they augment and the range of head movement (and not worrying as much about miniaturization) could make the problem tractable.

Still unclear. They do seem to imply they want it to be an unconstrained mobile AR device, but that is indeed ambitious enough to warrant skepticism. Walk-around tracking for home/anywhere is still an unsolved problem for Oculus, and overlay AR is at least several times more demanding.


I think if you're tracking the user's position, then it may be practical to beam different images to each eye for a good 3D experience.

That's (kinda) how the VR system Stephenson describes in Snow Crash works.

This sidesteps many of the issues you have with the Oculus Rift, where you have a tiny window of time (20 ms or so) to react to how the person is moving their head. You still have to update the image based on the user's position, but not nearly as much on head rotation, which is the really hard part; only each eye's location in space matters.

Multiple users would likely require multiple projectors.
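The per-eye geometry involved is simple to sketch. Here's a minimal example of deriving the two eye positions a tracked-projector system would need to target, assuming a yaw-only head pose and a typical interpupillary distance; all names and values here are illustrative, not from any actual system:

```python
import math

IPD = 0.064  # typical adult interpupillary distance in metres (assumed value)

def eye_positions(head, yaw, ipd=IPD):
    """Given a tracked head centre (x, y, z) and yaw in radians
    (rotation about the vertical axis, forward = +z at yaw 0),
    return the world-space (left_eye, right_eye) positions.
    Each eye is offset half the IPD along the head's right vector."""
    # Head-right unit vector for a yaw-only pose
    rx, rz = math.cos(yaw), -math.sin(yaw)
    half = ipd / 2.0
    x, y, z = head
    left_eye = (x - rx * half, y, z - rz * half)
    right_eye = (x + rx * half, y, z + rz * half)
    return left_eye, right_eye

# Head at standing height, facing +z: eyes land 3.2 cm either side
left, right = eye_positions((0.0, 1.7, 0.0), 0.0)
```

Two positions in space per user is all the renderer needs here, versus a full 6-DoF pose at sub-frame latency for head-mounted displays; the projector beams each eye its own view from those points.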





