- Parent Category: News
- Category: Technology News
- Published on Thursday, 30 May 2013 01:46
- Written by Gordon Meyer
The Holy Grail for 3D enthusiasts is to be able to shed those cumbersome glasses, especially in the home environment. Technology pioneers Dolby and Philips have now teamed up to make glasses-free 3D commercially viable for home entertainment. Dolby has been demonstrating the new technology for quite a while, most recently at the 2013 Consumer Electronics Show (CES) and Variety’s 3D Entertainment Summit. The advancements made since those events allowed Dolby and Philips to strongly promote the Dolby 3D process at the 2013 NAB Show. The presentation included an endorsement by James Cameron and an announcement of a formal working relationship with the Cameron Pace Group (CPG).
Autostereoscopic or glasses-free 3D displays are far from new. Prototypes have been displayed at CES and other trade shows and conferences for well over a dozen years, but, until recently, the technology hasn’t been ready for prime time. I recently sat down with Zaved Shamsuddin, the senior product-marketing manager of Dolby’s imaging playback broadcast business group, to learn more about the approach to autostereoscopic displays. One of the biggest problems so many prototypes have faced over the years has been the need for a relatively narrow viewing “sweet spot.” The only way to see a reasonable 3D image without distortion or ghosting was to watch from one very specific spot, without moving your head or shifting your body position. “If you have to put an X on the floor and you can’t move your head in order to get an optimal viewing experience, that is not a solution,” says Shamsuddin.
For autostereoscopic technology to work, it’s critical to optimize the content for a specific device. And because the characteristics of autostereoscopic displays are so different, you can’t just take a 3D image intended for glasses-based viewing and map it one to one. “You have to adjust many things to match the characteristics of that playback device, and you can’t expect a consumer to do that manually,” Shamsuddin explains. “It has to be something built into the system.” This is where the Dolby/Philips technology comes in, beginning with the acquisition of right-eye/left-eye image data, whether directly from native 3D camera rigs or from footage that’s been converted. The key is something called “metadata,” which is shorthand for data about data. For example, with camera footage the data itself would be the image, but the metadata would include things like the date and size of the file, the dimensions of the image, what generated the image, and so on. The Dolby 3D format embeds a special collection of metadata, including depth maps, into the image data stream so the filmmakers’ creative intent is preserved while optimizing 3D depth and transmission bandwidth. This is especially important for 3D broadcasts, where bandwidth is much smaller than what’s available on packaged media, like Blu-ray discs.
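The idea of carrying a depth map as metadata alongside the stereo images can be sketched in a few lines. This is purely illustrative; the class and field names are assumptions for the sake of the example, not the actual Dolby 3D format.

```python
from dataclasses import dataclass, field

@dataclass
class StereoFrame:
    """A toy stand-in for one frame of stereo image data."""
    left: list                                    # left-eye pixel rows
    right: list                                   # right-eye pixel rows
    metadata: dict = field(default_factory=dict)  # "data about data"

def embed_depth_map(frame: StereoFrame, depth_map: list) -> StereoFrame:
    """Attach a depth map to the frame's metadata so a downstream
    device doesn't have to re-derive depth from the two views."""
    frame.metadata["depth_map"] = depth_map
    frame.metadata["depth_embedded"] = True
    return frame

frame = StereoFrame(left=[[10, 20]], right=[[12, 22]])
frame = embed_depth_map(frame, depth_map=[[0.5, 0.7]])
```

The point of the sketch is only the separation of concerns: the image data stays untouched, while everything a playback device needs to optimize the picture rides along in the metadata.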
Currently, most glasses-free display panels work by having multiple views of an image so viewers can enjoy a 3D image from a variety of viewing angles. For example, the demo panel used at Dolby’s Burbank, Calif. offices presents 28 different views, but the 3D image data that typically comes into a display panel is just left and right. Sophisticated algorithms are used to extrapolate depth maps from those two views. According to Shamsuddin, that extrapolation process can never be the same as having the actual depth maps embedded in the data streams. That’s a big part of what the Dolby 3D format intends to address.
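To see why extrapolated depth is only an approximation, consider a minimal version of what such an algorithm does (this is a generic block-matching sketch, not Dolby's or any panel maker's actual method): for each pixel in the left view, find the horizontal shift that best matches the right view. That shift, the disparity, is a proxy for depth.

```python
def disparity_row(left, right, max_shift=3):
    """Estimate per-pixel disparity for one row of a stereo pair.
    Larger disparity means the point is closer to the viewer."""
    depths = []
    for x, value in enumerate(left):
        best_shift, best_err = 0, float("inf")
        # Try each candidate shift and keep the best match.
        for shift in range(max_shift + 1):
            if x - shift < 0:
                break
            err = abs(value - right[x - shift])
            if err < best_err:
                best_shift, best_err = shift, err
        depths.append(best_shift)
    return depths

# A bright pixel shifted one position between the views yields
# a disparity of 1 at that location.
row = disparity_row([0, 0, 9, 0, 0], [0, 9, 0, 0, 0])
```

Real estimators are far more sophisticated, but they all share the same limitation: in occluded or textureless regions there is simply no match to find, which is why an embedded, production-authored depth map beats any extrapolation.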
The Dolby 3D format works best when it’s incorporated on an end-to-end basis, from image acquisition right through to the Dolby 3D-equipped TVs, tablets and laptops in consumers’ living rooms once the technology comes to market. That’s why, at this year’s NAB, Dolby announced two key partnerships that will help realize the objective of incorporating the Dolby 3D format throughout the entire workflow process as well as the distribution and playback systems. The first partnership was with the Cameron Pace Group, one of the premier 3D production services companies in the industry (not just for James Cameron movies but for many other productions as well). And although formal announcements have not yet been made, Shamsuddin says that Dolby is in talks with most of the other key players in the content creation/distribution workflow to ensure wider coverage and support of the Dolby 3D format throughout the entire ecosystem.
On the post side, Dolby has an alliance with the Foundry, the UK-based company whose NUKE digital compositing software and OCULA plug-in for stereoscopic postproduction are widely used throughout the industry. Dolby and the Foundry are developing a series of plug-ins to easily embed Dolby 3D metadata into whatever 3D format a content owner wants, including broadcast, Blu-ray and streaming. “By partnering with the Foundry, we’re getting really good coverage and market share in the industry,” Shamsuddin notes. The first generation of the Dolby 3D plug-in is still in development, so many things, such as the user interface, are still being worked out. The goal is to make this plug-in as minimally intrusive as possible in the 3D post workflow.
While 3D workflow begins with left-eye and right-eye images, there’s actually much more information there, which Shamsuddin calls “raw metadata.” In the post process, the Dolby 3D technology organizes that raw metadata to do things like define and standardize depth maps for all distribution formats. All of that data is then processed through a Dolby 3D encoder to encode the information into a transport stream containing the video, the depth and metadata. Just as with Dolby’s audio encoder/decoder technologies, when 3D content is played back on a device equipped with Dolby 3D, a chip serves as both a Dolby 3D decoder and rendering engine. This tells the display device how to reproduce the 3D image exactly the way the filmmaker intended, especially on glasses-free displays that need to generate as many as 28 different views to provide the widest viewing angle possible for consumers.
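The encode-then-decode pipeline described above can be sketched end to end. Everything here is a simplified stand-in under stated assumptions: JSON plays the role of the real bitstream, a single pixel row plays the role of video, and the "rendering engine" is a toy depth-based pixel shift, not Dolby's actual decoder chip.

```python
import json

def encode(video_row, depth_row, metadata):
    """Multiplex image data, a depth map and metadata into one
    'transport stream' (JSON standing in for a real bitstream)."""
    return json.dumps({"video": video_row, "depth": depth_row,
                       "meta": metadata})

def decode_and_render(stream, num_views=28):
    """Decode the stream and synthesize num_views views by shifting
    each pixel in proportion to its depth (toy view synthesis)."""
    data = json.loads(stream)
    video, depth = data["video"], data["depth"]
    width = len(video)
    views = []
    for v in range(num_views):
        offset = v - num_views // 2        # virtual camera position
        view = [0] * width
        for x, (pixel, d) in enumerate(zip(video, depth)):
            nx = x + round(offset * d)     # deeper pixels shift less
            if 0 <= nx < width:
                view[nx] = pixel
        views.append(view)
    return views, data["meta"]

stream = encode([1, 2, 3], [0, 0, 0], {"intent": "as filmed"})
views, meta = decode_and_render(stream, num_views=3)
```

Because the depth map and creative-intent metadata travel inside the stream, every one of the synthesized views is driven by authored depth rather than by guesses the display makes on its own.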
Dolby 3D is designed to be hardware agnostic, so it can be embedded in any of the autostereoscopic display technologies currently in use, including lenticular array and parallax barrier systems. Declining to name the consumer electronics manufacturers that have signed on to become Dolby 3D licensees, Shamsuddin predicts that the first-generation prototypes of Dolby 3D-equipped TVs are likely to be shown at the 2014 Consumer Electronics Show, with the actual product arriving in stores one to two years later. While Dolby’s 3D-rendering engine will help make any 3D content look better, it will obviously work best with content encoded in Dolby 3D. TVs using this technology are still at least 18 months off, but Shamsuddin and his team are working hard to prime the pump with as much Dolby 3D-encoded product as possible through their work with CPG and the Foundry.