Headache-free 3D

Author: Bob Pank

Published 1st January 2012


With over 30 broadcast channels now offering 3D programmes, and cinemas in practically every major town and city equipped to show 3D films, demand for stereoscopic content is increasing rapidly. Display manufacturers are now responding to this growing market with various new technologies intended to improve the viewing experience. A useful show for experiencing these was IFA, held in Berlin 2-7 September immediately prior to IBC. A lot of new thinking was demonstrated, including a laser-based domestic video projector from Mitsubishi and Sony’s HMZ-T1 dual-OLED screen 3D headset. Direct-view 3D displays were also on show but mostly on small-screen devices such as mobile phones, pocket games consoles and, at IBC itself, 3D camera viewfinders.
Whatever display technology is used to deliver the 3D images, it is obviously important not to send viewers home with a headache, particularly before they have actually invested in a 3D system of their own. It helps a lot if 3D programme producers understand what causes such headaches because only then can they take effective measures to avoid them.
The brain routinely masks defects in the eye, both in the 2D domain and in 3D. Most of this masking is performed so efficiently that we are usually unaware of it. Masking of the human blind-spot is perhaps the best example. Less obvious is the lens distortion masking that soon comes into play for anyone requiring convex glasses to correct long sight or concave optics to correct short sight. Both have a rounding effect on doorways, rectangular bookshelves and so on, corrected by the brain’s tendency to see what it expects to see rather than what the eye actually delivers.
In 3D vision, the brain has greater correcting powers than most people ever appreciate. This can be demonstrated by the simple procedure of creating your own stereoscopic-pair photographs, or viewing commercially-produced stereoscopic images with a 3D viewer. Some people can train themselves to focus directly on to such images, either in near-focus mode or by focusing further away than the actual image distance. Use the wrong technique and the image will appear quite literally back-to-front.
With a traditional 3D viewer, the left and right images are normally locked in an optimum position and the brain is reasonably happy, apart perhaps from being unaccustomed to correcting for errors in the viewing lens. Now the experiment can start. Cut the two images, or a copy, from each other with scissors and slightly vary the position and clockwise or anticlockwise angle of the right-hand picture while keeping the left picture steady. This is a highly unnatural load to impose on an innocent visual cortex but, with practice and within a limited range, all of these distortions can be accommodated. (The 'with practice' bit is vital.)
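For readers who prefer pixels to scissors, the same perturbation can be tried digitally. The short Python sketch below, using the Pillow library and assuming two hypothetical, equal-sized files left.jpg and right.jpg, nudges and rotates the right-eye image before re-composing a side-by-side pair. The file names, offsets and angle are illustrative only and not part of any particular workflow.

```python
from PIL import Image

def perturb_right_eye(left_path, right_path, out_path,
                      dx=8, dy=3, angle=1.5):
    """Re-compose a side-by-side pair with the right-eye image
    deliberately offset by (dx, dy) pixels and rotated by `angle` degrees."""
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB")

    # Rotate about the centre, keeping the original canvas size.
    right = right.rotate(angle, resample=Image.BICUBIC, expand=False)

    # Paste both eyes onto one canvas, nudging the right-eye image.
    w, h = left.size
    canvas = Image.new("RGB", (w * 2, h), "black")
    canvas.paste(left, (0, 0))
    canvas.paste(right, (w + dx, dy))
    canvas.save(out_path)

# Hypothetical file names, purely for illustration.
perturb_right_eye("left.jpg", "right.jpg", "sbs_perturbed.jpg")
```

Viewing the result in a 3D viewer, or by free-viewing, gives a feel for how much misalignment the visual cortex can absorb before fusion breaks down.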
The brain’s 3D correction powers certainly don’t stop there. Using a single handheld 2D camera to shoot spaced 3D-pair images of motionless objects introduces many additional errors unless you happen to be very lucky. These include differing left and right scale, caused by the two shots being taken at different distances from the subject. Keystone distortion (both vertical and lateral) is almost inevitable with handheld shooting and will obviously affect the left and right images in different ways. Colour balance is also likely to vary from one shot to another.
The surprise is not that these errors are easily generated but that the brain can, within a limited range, rectify them as a natural part of its ability to fuse the images from our two eyes into a single solid image. This includes the ability to allow for two eyes with differing optical characteristics. With practice, you can actually fuse two roughly similar-looking and similar-sized human faces into one. If the human visual processing system is over-stretched, the more reliable of our two visual channels becomes absolutely dominant and the brain then simply experiences the world in 2D, just as an individual with good hearing in only one ear experiences the audio world in mono.
My point in all this is not to encourage you to try handheld single-camera 3D photography, unless you want to. (The accompanying image pairs are examples of what can be achieved simply by placing separately shot stills side by side in any photo-editing software and performing any desired fine adjustments.) Instead, it is to emphasise that a 3D imaging crew has three choices: get everything right during the initial shoot; attempt to fix minor defects during post-production; or simply let imperfect shots through and rely on the viewers to perform the final adjustments in their own heads. The last option runs the risk of sending some of the audience away with a headache.
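As an illustration of the kind of fine adjustments mentioned above, here is a minimal Python/Pillow sketch that roughly matches scale and colour balance between two separately shot stills and places them side by side. The file names are hypothetical and the grey-world-style gain is only a stand-in for proper grading; it is not a description of any photo-editing package's actual processing.

```python
from PIL import Image, ImageStat

def match_and_combine(left_path, right_path, out_path):
    """Roughly equalise scale and colour balance between two separately
    shot stills, then place them side by side for free-viewing."""
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB")

    # Scale the right image to the left image's height -- a crude fix
    # for the two shots having been taken at different distances.
    h = left.size[1]
    w = round(right.size[0] * h / right.size[1])
    right = right.resize((w, h))

    # Grey-world-style gain: pull the right image's channel means towards
    # the left image's, as a simple stand-in for colour balancing.
    lm = ImageStat.Stat(left).mean
    rm = ImageStat.Stat(right).mean
    right = Image.merge("RGB", [
        ch.point(lambda v, gain=lm[i] / rm[i]: min(255, int(v * gain)))
        for i, ch in enumerate(right.split())
    ])

    canvas = Image.new("RGB", (left.size[0] + right.size[0], h))
    canvas.paste(left, (0, 0))
    canvas.paste(right, (left.size[0], 0))
    canvas.save(out_path)

# Hypothetical file names, purely for illustration.
match_and_combine("left.jpg", "right.jpg", "stereo_pair.jpg")
```

Keystone correction would need a perspective transform on top of this, which is exactly the sort of work better done with a properly aligned rig in the first place.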
Development of the Cel-Scope3D stereoscopic signal analyser continues to bring new aspects of 3D vision to our notice, usually in the form of feedback from Cel-Soft customers who have identified another parameter needing attention during the 3D production process. The most recent of these was edge violation, detailed two months ago in this column and arising when all or part of an object (such as a lamp-post or the edge of a building) is visible to only one eye of a 3D image pair. The brain has its own simple fix for this: the resolution and focal precision of human vision fall away drastically towards the periphery. Nobody would buy a camera with that kind of specification!
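To make the idea concrete, the toy sketch below flags possible edge violations by measuring how different the two eyes are within narrow strips along the left and right frame borders. It assumes equal-sized, hypothetical left.jpg and right.jpg files and is purely illustrative; it is not how Cel-Scope3D performs its analysis.

```python
from PIL import Image, ImageChops, ImageStat

def border_mismatch(left_path, right_path, strip=32):
    """Crude edge-violation hint: score how different the two eyes are
    in narrow strips along the frame borders. A high score suggests
    content that only one eye can see near that edge."""
    left = Image.open(left_path).convert("L")
    right = Image.open(right_path).convert("L")
    w, h = left.size

    scores = {}
    for name, box in (("left edge", (0, 0, strip, h)),
                      ("right edge", (w - strip, 0, w, h))):
        diff = ImageChops.difference(left.crop(box), right.crop(box))
        scores[name] = ImageStat.Stat(diff).mean[0]  # 0-255, higher = worse
    return scores

print(border_mismatch("left.jpg", "right.jpg"))
```

A real analyser has to be far cleverer than this, since ordinary parallax also produces left-right differences, but the principle of watching the frame edges is the same.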

© KitPlus (tv-bay limited). All trademarks recognised. Reproduction of this content is strictly prohibited without written consent.