
This explains all the technical details - it is far more advanced than you would imagine:

https://www.youtube.com/watch?v=z_hm5oX7ZlE



Wow, that's a lot of work! And yet it seems a shame not to capture more information than a single RGB color value per pixel. I can think of three other types of information you could capture: 3D information (which they did capture, but only at a coarse level), the BRDF (bidirectional reflectance distribution function), and hyperspectral information. Of these the BRDF seems most important for viewing paintings, and I'm surprised there's no consideration of it given the thoroughness of the rest of the project.

With the BRDF you would be able to show the painting as it would appear under any lighting condition and from any viewing angle. This would make the painting appear much more realistic in a VR rendering, for example. The current stitched photo assumes uniform lighting and a straight-on view; if you simply mapped it to a texture on a plane in a 3D environment it would look flat and lifeless.
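
To make that concrete, here's a minimal relighting sketch. It assumes a simple Blinn-Phong reflectance model and hypothetical per-pixel parameter maps (albedo, spec, shininess, normals) that a BRDF capture could produce; none of these come from the actual project, which only recorded RGB.

    import numpy as np

    def relight(albedo, spec, shininess, normals,
                light_dir, view_dir, light_rgb=(1.0, 1.0, 1.0)):
        """Shade an HxWx3 image from per-pixel BRDF parameters.

        albedo:    HxWx3 diffuse reflectance in [0, 1]
        spec:      HxW   specular strength
        shininess: HxW   Blinn-Phong exponent
        normals:   HxWx3 unit surface normals (e.g. from the coarse 3D scan)
        light_dir, view_dir: unit vectors toward the light / camera
        """
        l = np.asarray(light_dir, dtype=np.float64)
        v = np.asarray(view_dir, dtype=np.float64)
        h = (l + v) / np.linalg.norm(l + v)  # half vector

        # Per-pixel dot products, clamped to the upper hemisphere.
        n_dot_l = np.clip(np.einsum('hwc,c->hw', normals, l), 0.0, None)
        n_dot_h = np.clip(np.einsum('hwc,c->hw', normals, h), 0.0, None)

        diffuse = albedo * n_dot_l[..., None]
        specular = (spec * n_dot_h ** shininess)[..., None]  # white highlights
        return np.clip((diffuse + specular) * np.asarray(light_rgb), 0.0, 1.0)

A real capture would instead fit a measured BRDF per pixel from many photos under varying light and view directions, but even this crude model would give you moving highlights and shading in VR instead of a flat texture.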


The implication is that the image tiles are overlapped, so the edge of one tile falls near the center of another. That might provide enough parallax information to do a 3D reconstruction. I wonder if the raw images and metadata are available?
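
If the raw tiles were available, the standard recipe would be to match features in the overlap region, recover the relative camera pose, and triangulate. A rough sketch with OpenCV follows; the intrinsics matrix K and the input images are assumed, and note that a nearly planar painting plus a tiny baseline makes this badly conditioned, which is part of why it would be hard in practice.

    import cv2
    import numpy as np

    def sparse_depth_from_overlap(img_a, img_b, K):
        # Detect and match features in the shared region of two tiles.
        orb = cv2.ORB_create(nfeatures=5000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

        pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

        # Relative pose from the essential matrix; baseline scale is unknown.
        E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
        _, R, t, mask = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)

        # Triangulate inlier matches into sparse 3D points (up to scale).
        P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P_b = K @ np.hstack([R, t])
        inl = mask.ravel().astype(bool)
        pts4 = cv2.triangulatePoints(P_a, P_b, pts_a[inl].T, pts_b[inl].T)
        return (pts4[:3] / pts4[3]).T  # Nx3 points in the first tile's frame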


It might be enough. It would be a hard algorithm to code, though, don't you think?

Do you think they should have just used stereo cameras in the first place?



