Turning 2D Photos into ‘3D’ Worlds
A photographer/videographer friend e-mailed me this, describing it as simply The Future.
It's the work of Kevin Karsch, a PhD student at the University of Illinois, and three collaborators there. Here's the short of it, from the paper (PDF):
We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements.
The video shows how it works; it doesn't look much more complicated than Photoshop. Does it work? According to them, eerily well:
Surprisingly, subjects tended to do a worse job identifying the real picture as the study progressed. We think that this may have been caused by people using a particular cue to guide their selection initially, but during the study decide that this cue is unreliable or incorrect, when in fact their initial intuition was accurate. If this is the case, it further demonstrates how realistic the synthetic scenes look as well as the inability of humans to pinpoint realistic cues.
Obviously this sort of thing is already possible, but the simplicity and quality of it blew my mind, especially once the inserted objects start moving.