- Parent Category: Cinematography
- Category: Cameras
- Published on Thursday, 17 May 2012 15:23
- Written by Carl Mrozek
After nearly two years of seeing sports televised live in 3D in the U.S., it’s easy to lose sight of the fact that it’s only been five years since the 2007 NBA All-Star Game in Las Vegas, which was the very first live 3D sports event. It was also the first sports event that was multi-cast in 2D as well as 3D. “We fed the left-eye signal to an auditorium in the Mandalay Bay Hotel where it was shown to a small audience via twin 4K projectors,” recalls Steve Hellmuth, the NBA broadcast operations and technology EVP who spearheaded this and other early NBA 3D efforts. “The quality of the uncompressed 3D signal was amazing.” The NBA has broadcast events in 3D ever since, and ESPN 3D featured 17 live NBA games during the 2010–11 season. ESPN 3D now beams a diverse menu of 3D sports, ranging from boxing and basketball to the Winter X Games, into thousands of homes via DIRECTV.
Despite all that has been learned about the technology and technique of live 3D sports production over the years, squeezing yet another TV crew into a busy pro-sports arena already crowded with a dozen cameras and crews, amid numerous still photographers and reporters, remains an ongoing challenge. Moreover, 3D production entails large 3D mirror rigs rather than compact cameras. “It’s very expensive to add new camera positions by killing seats, especially courtside,” Hellmuth explains. “We already did that to establish all of the HD camera positions that we now have. Our objective was to build on and work around that without killing more seats for 3D.”
One solution was to share 2D camera positions with 3D cameras, but two camera operators sharing one position didn’t work. The alternative approach was to have one cameraman operate both the standard HD camera and 3D camera rig — a formidable feat of multitasking. When Hellmuth put that challenge to 3D live-production pioneer Vince Pace, the result was the CAMERON | PACE Group Shadow D Rig, a side-by-side 3D camera rig mounted beside and synchronized with a standard 2D sports camera and lens (often a long, telephoto box-style lens). The Shadow D rig was designed to be operated by a 2D cameraman. “Initially we asked them to frame shots wider for 3D, but they often forgot so the 3D shots were often cropped too tightly,” says Pace. “With Shadow D, we factor that in and widen the 3D shot [relative to the 2D] ourselves. However, we can also tweak it as needed.” According to Pace, the Shadow D rig has been used widely for live pro and college sports coverage in the U.S.
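The automatic widening Pace describes can be sketched numerically. The following is an illustrative Python sketch of the idea, not CPG’s actual control software; the 15% widen factor and the 2/3-inch sensor width are assumptions chosen for the example.

```python
import math

def widened_3d_focal(focal_2d_mm: float, widen_factor: float = 1.15) -> float:
    """Focal length for the shadowing 3D rig, framed wider than the 2D
    camera it follows (hypothetical 15% default widening)."""
    return focal_2d_mm / widen_factor

def horizontal_fov_deg(focal_mm: float, sensor_width_mm: float = 9.59) -> float:
    """Horizontal field of view for a given focal length; a 2/3-inch
    sensor width (~9.59 mm) is assumed."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

# A shorter 3D focal length gives a wider field of view, leaving slack
# so the 3D framing is not cropped too tightly around the 2D shot.
f2d = 50.0
f3d = widened_3d_focal(f2d)
assert f3d < f2d
assert horizontal_fov_deg(f3d) > horizontal_fov_deg(f2d)
```

Because the widening is applied in the rig rather than by the operator, the 2D cameraman can frame normally while the 3D shot retains headroom, and the offset can still be tweaked per shot as Pace notes.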
“The Shadow rigs helped us to avoid more seat kills with our 3D [NBA] coverage,” says Hellmuth. “That was critical courtside, where the priciest seats are and where we already have multiple camera positions established. They’re all close to the action so we don’t need to use long focal lengths very often. That dovetails well with 3D where you want to be close to the action and to frame fairly wide. It’s all about capturing the fan’s perspective as seen from the ‘Jack Nicholson’ seats in the first few rows courtside. That’s also where many of our key camera positions are, under both baskets, plus the slash cameras in opposite corners.”
Another way of integrating 3D into 2D game coverage is by using footage from the 2D cameras. Some directors believe that intercutting a modest amount of 2D footage actually heightens the impact of bona fide 3D shots. “Your brain gets bored and fatigued from repeated 3D shots,” explains Nagamitsu Endo, executive producer for Japan’s Cosmomedia America, a special division of NHK that has been doing live production in 3D for nearly two decades. “[The brain] actually responds better to 3D if you rest it by intercutting some 2D shots. Every situation is different, so you have to see what is feasible and what makes the most sense.” Sometimes the company provides a 2D crew and other times it utilizes 2D shots from some of the host network’s cameras.
Successfully mixing 2D and 3D shots requires technique and finesse. “Some shots don’t have much depth value, like the wide full-court view and closeups with our ultra-telephoto box lenses,” notes Hellmuth. “Hence, we’ll often take shots from those [2D] cameras and run them through a 2D/3D converter and intercut them into the 3D broadcast. They generally work fine, especially if kept short.” ESPN 3D Coordinating Producer Phil Orlins has used 2D shots in 3D sportscasts, such as football games, where ultra-long lenses are the norm. “Of 10 to 12 cameras covering a football game, eight [may be] 3D but a few of them will be 2D especially for extreme telephoto shots,” says Orlins. “Our longest 3D zoom lenses max out at 44 times versus 100 times in 2D. Either way, at full telephoto the image is going to be pretty flat. Using a shot from a 2D camera doesn’t make a huge difference in terms of how it impacts the viewer, especially if we cross-convert it properly. Capturing the emotion on the athletes’ and coaches’ faces is an integral part of our coverage in 2D or 3D. If we can, we’ll get it with a 3D camera, but, if not, it’s better to get it in 2D than not at all. Either way, the depth of field is pretty shallow.”
According to Joe Signorino, Sr., a senior project engineer for NEP Broadcasting (which built the first live 3D trucks for sports), maintaining some depth when covering sports in 3D with long (telephoto) lenses has always been a challenge. “If you don’t adjust the interaxial distance between a stereo pair of long box lenses very carefully, you can get some strange distortions, like pronounced separation between the foreground and background parts of the image,” says Signorino. “At first we had to cross-convert to 3D because there were no long 3D zooms available.” Many sportscasts still cross-convert long telephoto shots to 3D despite having longer 3D lens options. “It still comes down to a finite number of camera positions at many venues, whether it’s courtside at the U.S. Open or along fairways at the Masters Tournament where cabling is an issue,” notes Signorino.
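Signorino’s point about interaxial and long lenses follows from basic stereo geometry: under the standard small-angle toe-in approximation, background disparity on the sensor scales with focal length times interaxial, so doubling the lens forces the interaxial to halve. A rough Python sketch of that relationship follows; the disparity “comfort budget” figure used here is an assumed example value, not an industry standard.

```python
def sensor_disparity_mm(focal_mm, interaxial_mm, conv_dist_mm, subject_dist_mm):
    """Approximate horizontal disparity on the sensor for a subject at
    subject_dist_mm when the stereo pair converges at conv_dist_mm
    (small-angle toe-in approximation)."""
    return focal_mm * interaxial_mm * (1.0 / conv_dist_mm - 1.0 / subject_dist_mm)

def max_interaxial_mm(focal_mm, conv_dist_mm, far_dist_mm, budget_mm):
    """Largest interaxial that keeps far-background disparity within an
    assumed comfort budget of budget_mm on the sensor."""
    return budget_mm / (focal_mm * (1.0 / conv_dist_mm - 1.0 / far_dist_mm))

# Doubling the focal length halves the allowable interaxial, which is
# why extreme telephoto stereo pairs end up looking nearly flat.
t_100 = max_interaxial_mm(100.0, 20000.0, 100000.0, 0.24)  # 100 mm lens
t_200 = max_interaxial_mm(200.0, 20000.0, 100000.0, 0.24)  # 200 mm lens
assert t_200 < t_100
```

Push the interaxial past that budget with a long lens and the foreground and background pull apart, the “pronounced separation” distortion Signorino describes.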
Producers and DPs have found that good 3D shooting techniques can run counter to the training that many cameramen received for covering live events in 2D. “Fast pans and zooms make you sick in 3D,” notes Signorino. “We train our camera operators to go slow whenever panning, tilting [or] zooming…. A good 3D [viewer] experience is paramount.” Endo always prefers to work with his own seasoned 3D crews, whether his is the only crew at an event or one of two or more. “If the budget allows, we always try to keep the 3D and 2D crews separate, because they have different mindsets and styles,” he says. However, if a client insists, Endo will train local crews for 3D shoots, if he can do it properly. “It takes time and experience to shoot good 3D, but I have trained many local crews to do it,” says Endo. “The key is spending enough time, including a dress rehearsal before the big event. It makes a big difference if they do some serious shooting and can review it on a 3D monitor before the real event, so they can see what works and what doesn’t.” Endo has seen some 3D live-broadcast disasters as a result of having 2D crews on 3D cameras with too little advance training. “In some cases, people got headaches, left the theater and demanded their money back,” Endo recalls. “It takes more than an hour of training to expect good results.”
New technology may help to bridge the 2D-to-3D training gap. The Cameron/Pace Group’s new Shadow D rigs rely on the 2D camera operator’s moves to drive the 3D cameras linked to them. “The Shadow D system reads the zoom and focus from the 2D operator’s control to drive the 3D zoom and focus,” explains Cameron/Pace Chief Technology Officer Patrick Campbell. “3D convergence and tilt can also be driven automatically from the 2D operator’s focus control. They were first used on ESPN’s Summer X Games and later on the U.S. Open for CBS Sports and boxing for HBO Sports. The future of 3D production lies in integrating the 3D show with the 2D show as much as possible. The Shadow D system allows the use of shared resources, camera positions and operators.”
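The focus-to-convergence link Campbell describes can be approximated with simple geometry: toe the two cameras in so their optical axes cross at the focused distance. The sketch below is a minimal geometric illustration of that idea, not CPG’s actual control law.

```python
import math

def convergence_angle_deg(interaxial_mm: float, focus_dist_mm: float) -> float:
    """Total toe-in angle that converges a stereo pair with the given
    interaxial at the currently focused distance (simple geometry)."""
    return math.degrees(2.0 * math.atan(interaxial_mm / (2.0 * focus_dist_mm)))

# Racking focus closer automatically converges the rig harder; focusing
# toward infinity drives the optical axes back toward parallel.
assert convergence_angle_deg(60.0, 2000.0) > convergence_angle_deg(60.0, 10000.0)
```

Slaving convergence to the 2D operator’s focus control this way is what lets one cameraman effectively drive both rigs at once.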
Another big issue for 3D sports producers is that most legacy camera positions aren’t selected with 3D in mind, but Orlins prefers to make the 3D production his primary task. “We do a lot of 5D production, meaning 3D and 2D at once,” he explains. “My approach is to focus on the 3D, using 3D cameras whenever possible. I prefer to take the left-eye camera feed for our 2D broadcast, rather than use a separate 2D camera. It’s less distracting and lets us concentrate on capturing great 3D.” Orlins’ approach to 5D production is working well, as the 2D camera positions in basketball, boxing, tennis and other sports are pretty close to the action and can be shot fairly wide. “[This] is ideal for 3D and for our style of 5D production,” says Orlins, conceding that some 2D cameras may be used well into the future. “There are no super slow-motion 3D cameras available, so we’ll be using [2D] Phantoms for the foreseeable future, especially in sports like basketball and football.” Nevertheless, Orlins constantly scouts out new 3D cameras. ESPN 3D crews are already using new 3D camcorders, like the Sony PMW-TD300 and Panasonic AG-3DA1 and newer AG-3DP1, for interviews, standups and B-roll footage. “Our goal is to make the viewing experience as satisfying as possible by using all the tools at our disposal,” Orlins says.