
Photogrammetry can produce outstanding results with just the click of a button, as long as the lighting is good and there are enough photos to cover every angle.

At the Stanford museum, made with open source Meshroom and a cell phone camera: https://sketchfab.com/3d-models/parvati-cantor-arts-center-a...



And you have a good surface texture.

I have off-and-on attempted to use photogrammetry to measure things with very little surface texture, and consistently failed.

There's a reason basically every photogrammetry example you see is of stone statues or organic structures (trees, dirt, things with lots of texture at every scale).


Yeah, I tried to scan my hallway using Meshroom. It took about 3 hours to solve and completely failed on any of the white surfaces. OK, I guess that isn't entirely unexpected, but it's not like they were completely free of texture. The solve time was also very disappointing.

Unfortunately there seems to be a huge gap between Meshroom (the best open source solution AFAIK) and commercial solutions like Matterport, or even iPhone apps (yes, I know the iPhone has a depth camera, but even so).

There are also a ton of research results that are way better than Meshroom, but unfortunately they never have code that you can actually use.

This sort of thing: https://youtu.be/TZ1eToXQwN0


I wonder if it would work to project a microdot pattern on those kinds of objects.

Obviously you'd lose the ease of use of just using your cellphone, though.


That's kind of what the original Kinect depth camera did. It projected a known constellation of IR dots to help with the 3D reconstruction.


The problem with that approach is that the microdot pattern has to stay fixed relative to the scene. SfM (structure-from-motion) approaches work by matching features between images to establish correspondences across views. If the microdot pattern moves from one image to the next, the correspondence matching fails.

AIUI, the Kinect and similar tools don't do SfM; they measure depth by looking at how the projected microdot pattern is distorted by the scene.
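To make the correspondence point concrete, here's a minimal sketch of the feature-matching step SfM pipelines rely on, using OpenCV's ORB detector (the filenames are placeholders):

    import cv2

    # Load two overlapping views of the same scene (grayscale).
    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute binary descriptors in each view.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match descriptors across views. SfM assumes each match is the
    # same physical surface point seen from two camera positions; if a
    # projected dot pattern shifts between exposures, matched features
    # no longer pin down fixed 3D points and triangulation falls apart.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    print(f"{len(matches)} candidate correspondences")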


You could try cutting some colored masking tape into angular pieces and sticking them on until there are enough places to track.


Good idea, but I really shouldn't have to do that, and it still shouldn't take 3 hours.

The problem is that Meshroom is based on old methods from the Photosynth era.

I imagine eventually some researcher will go the extra mile and contribute a modern algorithm to Meshroom, because the rest of Meshroom (the tools, the interface, etc.) is really good.


Could you explain why those are things you "shouldn't have to do"?

Who owes you this functionality?


It's shorthand for "I believe this is a problem that can be solved by the technology rather than requiring a manual workaround", not "I am entitled to a solution".


I worked in computer vision a few years ago, and I was wondering if you could solve this with a camera flash. Let's say you take 2 pictures in quick succession, one with and one without a flash. Let's assume you know the intensity of the flash and that the 2 pictures are aligned pixel-to-pixel. Now, for each pixel, the colour difference between the 2 pictures is going to depend on the albedo of the surface (for a white wall it's going to be relatively constant) and the distance from the flash: farther objects would change less (they are less affected by the flash), and closer objects would change more. You could write down the math for this and solve for "distance from camera" at each pixel.
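Roughly like this, as a back-of-the-envelope sketch (filenames and constants are placeholders; it assumes a linear sensor response, a known constant albedo, pure inverse-square falloff, and ignores surface angle):

    import numpy as np
    import cv2

    # Two pixel-aligned exposures: ambient only, and ambient + flash.
    ambient = cv2.imread("no_flash.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
    flashed = cv2.imread("with_flash.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

    FLASH_POWER = 1.0e4  # assumed known flash intensity (arbitrary units)
    ALBEDO = 0.8         # assumed constant albedo, e.g. a white wall

    # Subtracting the ambient frame leaves only the flash contribution,
    # modelled here as albedo * power / d^2 at each pixel.
    diff = np.clip(flashed - ambient, 1e-6, None)

    # Solve diff = ALBEDO * FLASH_POWER / d^2 for d per pixel.
    depth = np.sqrt(ALBEDO * FLASH_POWER / diff)

In practice you'd want RAW (linear) frames, and since albedo really varies per pixel, two exposures only pin down depth up to the unknown albedo, hence the "white wall" caveat.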


> I was wondering if you could solve this with a camera flash

Yeah, I think you can; there have been a few "multi-flash 3D reconstruction" approaches proposed in the last 5-10 years, e.g. [1] and possibly also [2].

[1] https://www.nature.com/articles/srep10909 [2] https://www.researchgate.net/publication/220829727_Shape_fro...


It is tricky because you also have indirect lighting.


If you subtract the 2 images, the indirect lighting gets removed and you're left with only the flash contribution, which depends only on albedo and distance.


You're missing secondary reflections: the flash's own light bouncing between surfaces isn't removed by the subtraction.



