Correcting Stereo 3D Volume and Focal Length in Post Production (Part 1)

In my previous post, I pointed out an excellent article by Phil McNally of DreamWorks.  Without rehashing that post, his article demonstrates through visual intuition a subject that is very hard to explain otherwise: the interaction of lens focal length with stereoscopic “volume” and the flattening of distant objects.

http://www.captain3d.com/temp/cml/cml_volume.html

Also, in my previous post, I threatened to describe ways to actually fix “flat/cardboarded” shots after the stereo camera has captured them.  This is not simply a matter of translating the eyes relative to each other (which just moves the apparent convergence point).  Cardboarding is a symptom of a problem with what’s known as the depth budget.  To oversimplify, the depth budget is the maximum allowed amount of behind-the-screen depth plus the maximum allowed amount of in-front-of-the-screen depth.  It is variously expressed as a percentage of screen width, relative to the aspect ratio, or in raw pixels of disparity between the left- and right-eye images.
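To make those units concrete, here is a minimal sketch (my own illustration, with made-up budget numbers) of converting a depth budget quoted as a percentage of screen width into raw pixel disparities for a given delivery resolution:

```python
def depth_budget_pixels(image_width_px, behind_pct, in_front_pct):
    """Return (behind-screen, in-front-of-screen, total) disparity in pixels
    for a depth budget quoted as percentages of screen width."""
    behind = image_width_px * behind_pct / 100.0
    in_front = image_width_px * in_front_pct / 100.0
    return behind, in_front, behind + in_front

# e.g. a 1920-pixel-wide frame with a +2% / -1% budget (illustrative only):
print(depth_budget_pixels(1920, 2.0, 1.0))   # -> (38.4, 19.2, 57.6)
```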

To change the depth budget itself, one must literally “reproject” the eyes from different camera positions.  In the CG world this is relatively easy: you just move your virtual cameras around.  In the real world, after the stereo imagery has been captured, it is a non-trivial problem, because there is no “virtual” camera to move around after the fact.  Literally, what we’re talking about is changing the apparent interaxial distance of the two cameras AFTER the take.
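For intuition about why the interaxial is effectively baked in at the take, here is a rough first-order model (a standard parallel-rig approximation of my own, not taken from any particular tool) of how screen disparity depends on interaxial, focal length, scene depth, and convergence distance:

```python
def screen_disparity_px(scene_dist_m, interaxial_m, focal_mm,
                        sensor_width_mm, image_width_px, convergence_m):
    """Approximate screen disparity (pixels) of a point at scene_dist_m for a
    parallel rig converged by horizontal image translation.  Positive values
    land behind the screen, negative values in front."""
    focal_px = focal_mm / sensor_width_mm * image_width_px
    return focal_px * interaxial_m * (1.0 / convergence_m - 1.0 / scene_dist_m)

# A 35 mm lens on a 36 mm-wide sensor delivered at 1920 px, 65 mm interaxial,
# converged at 3 m: a point at 10 m sits roughly 28 px behind the screen.
# Changing the convergence only shifts where zero parallax sits; the total
# near-to-far disparity range is fixed once interaxial and focal length are
# shot, which is why re-converging alone cannot fix an overshot budget.
print(screen_disparity_px(10.0, 0.065, 35.0, 36.0, 1920, 3.0))
```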

Clyde DeSouza has a very clever trick in his post on realvision.ae that allows one to reduce the apparent interaxial: it uses existing optical-flow plugins to calculate “tween” images between the two eyes.  This is also useful for generating multiview images for autostereo displays.  He also has an excellent video channel on YouTube, where a couple of his videos show this process in action.

This works in the unfortunately common case where too much camera interaxial distance was used for the type of shot and the focal length of the lens.  In that case one simply needs to reduce the interaxial, and tweening is a good way to do so, as long as the tweening/optical-flow engine driving it is of extremely high quality and does not leave visible artifacts.  An example of such an artifact (and Clyde dutifully calls it out) is the stretching distortion seen at the top of the example shots he shows.
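I don’t know which plugins Clyde uses, but the underlying idea can be sketched with a generic dense optical-flow engine.  The sketch below (OpenCV’s Farnebäck flow, my own function name, and a deliberately crude backward warp) shows how a reduced-interaxial eye could be synthesized, and also why occlusion edges produce exactly the kind of stretching he points out:

```python
import cv2
import numpy as np

def tween_view(left_bgr, right_bgr, t):
    """Synthesize a view at fraction t between the left eye (t=0) and the
    right eye (t=1) by backward-warping the left image along a dense
    optical-flow field.  Crude on purpose: it ignores occlusions, so
    stretching appears near depth edges, the artifact called out above."""
    left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # Dense flow defined at left-image pixels: left(y, x) ~ right(y+dy, x+dx).
    flow = cv2.calcOpticalFlowFarneback(left_gray, right_gray, None,
                                        0.5, 4, 21, 3, 5, 1.1, 0)

    h, w = left_gray.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Backward warp: for each output pixel, step t of the way back along the
    # flow and sample the left image there.
    map_x = (xs - t * flow[..., 0]).astype(np.float32)
    map_y = (ys - t * flow[..., 1]).astype(np.float32)
    return cv2.remap(left_bgr, map_x, map_y, cv2.INTER_LINEAR)

# Keep the original left eye and replace the right eye with a view only 40%
# of the way across the rig, i.e. cut the apparent interaxial to 40%:
# new_right = tween_view(left_img, right_img, 0.4)
```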

However, cardboarding is a circumstance where simply reducing the interaxial distance of the virtual camera does no good; in fact, one way to fix it would be to increase the interaxial.  A better way would be to simulate moving the camera closer to the subject concomitantly with a change in the apparent focal length of the lens system(s).  This requires more work.  There are tools that can do this (Ocula and Mistika are examples), but as a practical matter they have fundamental limits on the severity of the impairments they can repair.  I’ve heard it said very matter-of-factly in Hollywood post-production circles that if a shot’s depth budget is overshot by more than about 1.5% of screen width, the shot is not fixable.  Further, if the shot is cardboarded, it’s not fixable, period, end of story.
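Why “move the camera closer and widen the lens” is the real cure can be seen from a common first-order roundness model (my own simplification, with assumed viewing conditions): the apparent volume of a subject at the convergence distance scales with interaxial and viewing distance, and inversely with eye separation and subject distance.  Focal length drops out at the subject itself, but for a fixed framing a longer lens forces a larger subject distance, and that is exactly what flattens the shot:

```python
def roundness(interaxial_m, subject_dist_m, viewing_dist_m=3.0, eye_sep_m=0.065):
    """First-order 'roundness' of an object at the convergence distance:
    about 1.0 means natural volume, much less than 1.0 means cardboarded.
    Assumes the rig is converged on the subject, parallaxes are small, and
    the viewing-distance / eye-separation defaults are assumed values."""
    return (viewing_dist_m * interaxial_m) / (eye_sep_m * subject_dist_m)

# The same framing obtained two ways (illustrative numbers):
print(roundness(0.065, 12.0))   # long lens from 12 m -> 0.25, cardboarded
print(roundness(0.065, 3.0))    # wide lens from 3 m  -> 1.0, natural volume
```

Increasing the interaxial raises the roundness directly; moving the camera closer (with a wider lens to hold the framing) raises it by shrinking the subject distance.  Either way, the change has to be simulated after the fact, which is the hard part.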

However, it IS possible to fix these shots.  That’s for the next post.

~ by opticalflow on July 24, 2011.
