
Turning 2D Photos into ‘3D’ Worlds

Four UIUC students present a method for “rendering synthetic objects into legacy photographs.” The short of it: it’s mind-blowing.

A photographer/videographer friend e-mailed me this, describing it as simply The Future.

Rendering Synthetic Objects into Legacy Photographs from Kevin Karsch on Vimeo.

It’s the work of PhD student Kevin Karsch and three other University of Illinois researchers. Here’s the gist, from the paper (PDF):

We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements.

The video shows how it works; it doesn’t look much more complicated than Photoshop. Does it work? According to them, eerily well:

Surprisingly, subjects tended to do a worse job identifying the real picture as the study progressed. We think that this may have been caused by people using a particular cue to guide their selection initially, but during the study decide that this cue is unreliable or incorrect, when in fact their initial intuition was accurate. If this is the case, it further demonstrates how realistic the synthetic scenes look as well as the inability of humans to pinpoint realistic cues.

Obviously this sort of thing is already possible with existing tools, but the simplicity and quality of this approach blew my mind, especially once the inserted objects start moving.
