Back to Basics with Stereo 3D

Bob Pank

Published 1st October 2011


In 2011 more than 35 networks in Europe and North America will broadcast stereo 3D programming, and several will launch dedicated 3D channels. Some 40 3D features are expected to reach theatres this year, and stereo-3D-ready consumer devices, from TVs to video cameras to smartphones, are hitting the streets.
The demand for compelling content is growing, and of course, stereo 3D work comes with challenges. Like everything else in production and post, creating ‘good 3D’ is part art, part science. Going back to the basics of the science can inform artistic and equipment decisions and keep 3D headaches at bay.
At its most basic level 3D is a lie, an optical trick. We use two cameras to mimic the way human eyes view the world and fool our brains into thinking we’re seeing a 3D scene. The harder the brain has to work to make sense of the illusion, the easier it is for 3D to fail.
The ruse starts with two cameras that can be rigged side by side or in a beam-splitter (mirror) setup. Beam-splitter rigs mount two cameras at a 90° angle with a two-way mirror in between: one camera shoots directly through the mirror while the second shoots the image reflected off it. The resulting image pair is the basic building block of the trick. Here are some of the other elements.
Inter-axial distance, the space between the centerlines of the two cameras, mimics interocular distance, the roughly 2.5-inch space between human eyes. Reducing or expanding this distance affects the illusion because of convergence, the point in space where the two cameras are aimed (and where viewers’ eyes are looking). As a viewer converges on a point ‘closer’ to them, objects in the background double up, or diverge. Our brains can normally handle this, but if viewers stay converged on a close object for too long they’ll eventually get headaches.
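For readers who want to put numbers on this, the relationship between inter-axial distance, convergence distance and on-screen parallax can be sketched with a simplified parallel-axis model. This is a back-of-envelope illustration (the function name, the model and the figures are our assumptions, not from the article):

```python
def screen_parallax(interaxial, convergence_dist, object_dist):
    """Approximate parallax at the convergence plane for a parallel-axis
    rig converged by horizontal image shift (simplified pinhole model).

    Returns a value in the same units as `interaxial`: zero at the
    convergence plane (the screen), positive behind it, negative in
    front of it.
    """
    return interaxial * (1.0 - convergence_dist / object_dist)

# Example: 65 mm inter-axial, converged at 2 m.
print(screen_parallax(65, 2.0, 2.0))  # object at the screen plane -> 0.0
print(screen_parallax(65, 2.0, 4.0))  # behind the screen -> 32.5 mm
print(screen_parallax(65, 2.0, 1.0))  # in front of the screen -> -65.0 mm
```

Note that as the object distance grows toward infinity the parallax approaches the full inter-axial distance, which is why distant backgrounds are where divergence shows up first.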
In creating 3D, the point of convergence always corresponds to the plane of the viewing screen. Here’s where your rig choice comes into play.
Side-by-side camera rigs limit how close objects can be to the cameras because they can’t physically converge on the object. Mirror rigs have an advantage here, because you can reduce the inter-axial distance and converge on closer objects without creating a lot of divergence in the background.
Also, the fact that the convergence point is always at the plane of the screen means that some objects will appear to be in front of the screen (negative parallax) and some will appear to be behind it (positive parallax). Beware of frame violations, where objects in negative parallax extend outside of the image frame. These will break the illusion of 3D by creating a visual disconnect where the left and right eye images no longer make sense.
Maintaining alignment between cameras (and lenses) is critical. Color and/or luminance differences, vertical/horizontal alignment offsets, zoom differences and rotational offsets can all cause visual disconnects that force the brain to compensate.
Another consideration is the type of display used for final delivery. Most stereo monitors require the left- and right-eye images to be combined, or muxed, into a single image, and the specifics of this muxing differ between monitors. In addition, the size of the display affects parallax: a separation of a few pixels may be negligible on a smaller display, but on a 50-foot projection screen it can become a physical separation wider than the viewer’s eyes, forcing them to diverge.
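To make that scaling concrete, here is a quick sketch that converts a pixel disparity into a physical on-screen separation (the function name and the screen dimensions are illustrative assumptions):

```python
def disparity_width(pixels, screen_width, horizontal_res):
    """Physical separation (same units as screen_width) produced by a
    given pixel disparity in material of the given horizontal resolution."""
    return pixels * screen_width / horizontal_res

# A 10-pixel disparity in 1920-pixel-wide material:
tv = disparity_width(10, 1100, 1920)       # ~50" TV, ~1100 mm wide
cinema = disparity_width(10, 15000, 1920)  # ~50-foot screen, ~15 m wide
print(tv, cinema)  # roughly 5.7 mm vs 78 mm
```

Since adult eyes are only about 65 mm apart, any positive parallax wider than that on screen forces the eyes to diverge: the same 10-pixel offset is harmless on the TV but crosses the line in the cinema.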
Shooting tips:
Decide carefully whether a shot needs to have a heavy 3D look. Most don’t.
Neither shallow depth of field nor rack focus makes sense in 3D. Because our eyes have a wide depth of field, it’s convergence that drives our focus. Rely on that to direct viewers’ attention.
Keep negative parallax to a minimum.
Avoid run-and-gun or handheld shots. The camera movement in stereo 3D can make viewers seasick.
Your best bet is to rely on the basics of filmmaking to make things look dramatic and get a great 3D shot: framing, lighting, composition, action, foreground objects and depth cues.
Editing tips:
Typically, stereo 3D edits are made using left-eye images only, then screened in 3D to check. As with shooting, there are some things to be aware of when editing a stereo project.
Avoid fast cuts and swish pans. It takes time for eyes and brains to decipher 3D.
Be aware of the parallax positioning of objects between cuts. For instance, if you have a wide shot of two people with both in positive parallax, then cut to a closeup where one person’s face is in negative parallax, ease the change by managing and animating the convergence over a few frames.
Watch for frame violations and divergence and adjust as much as possible.
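The convergence-easing advice above, animating the change over a few frames rather than jumping at the cut, can be sketched as a per-frame ramp of the horizontal image translation. This is a simple linear interpolation of our own devising; real editing tools typically offer eased keyframe curves instead:

```python
def convergence_ramp(start_shift, end_shift, frames):
    """Horizontal image translation (e.g. in pixels) for each frame of a
    short convergence ease across a cut. Requires frames >= 2."""
    step = (end_shift - start_shift) / (frames - 1)
    return [start_shift + i * step for i in range(frames)]

# Ease an 8-pixel convergence change over 5 frames:
print(convergence_ramp(0, 8, 5))  # [0.0, 2.0, 4.0, 6.0, 8.0]
```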
The key to working with 3D successfully is having tools that enable you to monitor and check your 3D while shooting, work with 3D footage easily in editorial, and switch between working in 2D and 3D for review and approval. That’s where we at AJA have focused our efforts.
For on-set monitoring, dual AJA Ki Pro Mini digital recorders can capture separate left- and right-eye images at full resolution, and even keep file naming synchronized. An AJA GEN10 can provide reference to the cameras to ensure signals are locked together. The signals can be sent to an AJA Hi5-3D Mini-Converter, which muxes the images together for stereo viewing on a 3D monitor.
In editorial, AJA’s KONA 3G video I/O card (with dual inputs and outputs) can be combined with Cineform’s Neo 3D codec to capture, monitor and output stereo 3D images on the Mac in full resolution. The KONA 3G can drive professional-level projectors or consumer-level monitors that use muxed signals.
The right tools, along with a firm grasp of the basic tenets of 3D, can help content creators do what they do best and realize new opportunity in this growing field.


© KitPlus (tv-bay limited). All trademarks recognised. Reproduction of this content is strictly prohibited without written consent.