By Gordon Meyer
While multi-channel sound has been around in movie theatres since the original 1940 roadshow release of Walt Disney’s “Fantasia,” it was Dolby’s encoding and stereo optical technology introduced in the mid-1970s that made multi-channel sound a practical reality for filmmakers and audiences alike. The original Dolby Stereo was an analog matrix system that featured Right, Left, Center and monophonic Surround channels. Engineers have since built on that audio model by adding more channels: 5.1 splits the surround track into left and right surround channels, and the expanded 7.1 adds a pair of rear channels behind the audience. Ironically enough, 7.1 audio is found far more often in consumer living rooms than in cinemas.
But when it comes to creating a truly immersive audio field that accurately reproduces the virtually infinite number of directions from which sound can reach the human ear, engineers have long acknowledged that channel-based sound modeling is a compromise. Dolby’s chief competitor, DTS, addresses the desire for a more naturally immersive sound field with its 11.1-channel Neo:X system, which adds front and side height channels to further envelop listeners. I’ve heard it demonstrated at the Consumer Electronics Show and it’s very impressive.
DTS faces some important challenges in making their technology a new standard. First, DTS is no longer in the theatrical sound business. Their technology is totally focused on the consumer space. But before consumers can be persuaded to invest in upgrading from 5.1 to 11.1 channel sound systems, they need to experience 11.1 first and their local Best Buy isn’t exactly the best environment to show off high end audio. The other challenge is one of “chicken and egg,” in that no matter how good a technology like Neo:X is on paper, the reality is that it’s only as good as the content you can get that’s natively mixed for it. Sure, part of that technology is designed to extrapolate all those extra channels from existing 5.1 or 7.1 data streams, but it’s the audio equivalent of converting 2D imagery to 3D. You’re simply not going to get the same quality or listener experience as you would with audio natively mixed in 11.1.
Now comes Dolby with the announcement, at this week’s CinemaCon convention in Las Vegas (formerly the National Association of Theatre Owners’ ShoWest), of a new technology called Dolby Atmos. In the simplest of terms, Atmos adds a series of ceiling-mounted speakers down the center of the auditorium and creates a scalable master audio track that can handle up to 64 discrete speaker feeds while remaining fully backwards compatible with simpler sound systems.
According to Dolby’s press release, the new technology “introduces a hybrid approach to mixing and directs sound as dynamic objects that envelop the listener, in combination with channels for playback. Dolby Atmos enables adaptive rendering to ensure that the playback experience is as close as possible to the creator's original vision in any given environment, irrespective of the specific speaker configuration in the playback environment.” Dolby claims that its Atmos technology essentially enables filmmakers to place sound just about anywhere in a three-dimensional, hemispherical sound field, giving them much more accurate control over audio dimensionality.
One of the ways that Dolby Atmos achieves its audio immersiveness is that it enables sound to be directed to any specific speaker in the house. Ironically, this was the very approach that RCA engineers used when creating the Fantasound technology for “Fantasia” in 1940. Of course, thanks to digital technology, Dolby’s engineers have taken that concept to a whole new level.
At the end of the day, it’s going to be the filmmakers themselves who make or break Atmos by how effectively it’s used. Exhibitors will invest in the new equipment either because someone else is paying for it or because the Dolby Atmos audience experience is going to sell more tickets for them. It’s really that simple. In 1977, thousands of theatres installed Dolby Stereo equipment so they could play “Star Wars” and reap the box office benefits, even though the technology was introduced two years earlier.
Another part of Dolby’s challenge is to demonstrate that the difference in audience experience between a film presented in the now standard 5.1 mix and Atmos is as dramatic as the difference between mono and stereo or two channel stereo and 5.1 surround. For that, Dolby will probably need several “killer app” releases that really show off their technology to audiences and exhibitors alike. But once they reach critical mass in terms of the number of movies using this technology and the number of screens around the world that have it installed, filmmakers will have yet another powerful storytelling tool to create immersive experiences for audiences.
By Gordon Meyer
Last night my alma mater, the USC School of Cinematic Arts, hosted an evening with Brett Ratner for current students and alumni. I first met Brett probably ten years ago at BookExpo America where he was promoting his photography book. As I recall, Brett had one of those photo booth machines in his home that would take a quartet of pictures of whoever was sitting in the booth and process them while you wait. The book was a collection of probably over a hundred of the photo strips taken in his home and consisting of a who’s who of Hollywood.
At the time we met, I was producing and hosting a live talk show at ArcLight Hollywood called “Hollywood’s Master Storytellers” and Brett graciously accepted my invitation to talk about the making of “Rush Hour.” He regaled our audience with great behind the scenes stories about working with Jackie Chan and Chris Tucker as well as some very inspiring insights about the business. We both had such a good time that we brought him back a few years later to talk for a DVD launch event promoting “After the Sunset.”
It had been a few years since I last saw Brett, but since I remembered how good he was in front of an audience, I happily sent in my RSVP for the event. While he and the evening’s host Jeremy Kagan touched on a lot of subjects, the underlying theme always came back to what I call the “Three Ps: Passion, Persistence and Preparation.”
Brett talked about how, at the age of eight, he knew 100% that he was not only going to film school when he graduated high school, but that he would specifically go to NYU, because that’s where Martin Scorsese went. No other school was an option for him. Then he told the story of his admissions interview at NYU and how, because his grades were sub-par, the admissions officer refused to look at any of the films he brought to showcase his talent and essentially told him, “No way, Jose.”
But when Brett’s passionate about something, he doesn’t take “No” for an answer. He immediately made his way to the Dean’s office and insisted on seeing him right away, “as a matter of life or death.” When the Dean agreed to see him, whatever Brett said worked because a few weeks later, he received a letter of acceptance from the NYU Film School. Persistence and passion paid off. And it wouldn’t be the first time for Brett as he illustrated with several more anecdotes.
Then there was the issue of Preparation. As Brett said several times, “Luck is when opportunity meets preparation.” When he graduated from NYU, although he had the opportunity to start directing features right away, albeit low budget ones, instead he focused on honing his craft as a cinematic storyteller by making music videos, each one telling a story with a beginning, middle and end. Only when he had enough of those under his belt to feel confident in his filmmaking skills did he tackle features.
When he took on directing the feature “The Family Man,” he passionately wanted Nicolas Cage to star. So he did his homework, first finding out about some of Cage’s passions and making sure they were incorporated into the screenplay, then identifying key scenes in the screenplay and giving Cage a DVD with scenes from movies like “Kramer vs. Kramer” to illustrate the tone he wanted. Those two tactics are what ultimately sold Cage on making a film that so many industry wags were sure he would pass on.
Yes, these stories and the others Brett told were both informative and inspiring. But at the end of the evening, what I really came away with was how critical a role that passion plays in our business. Brett often commented on the number of his NYU classmates who were much more talented than he was. But he loves what he does so much that when he really wants something, he won’t let someone telling him “no” stand in his way and he does his homework so that when opportunity knocks, he’s ready.
His passion was a very human reminder of why I’m in this business and his stories of persistence and preparation vivid reminders of what it takes to make it. Thank you Brett. I hope we’ll work together very soon.
By Gordon Meyer
A colleague of mine is in San Francisco this weekend for a special feature of the San Francisco Silent Film Festival and I wish I were there with him. It’s the North American Premiere of French filmmaker Abel Gance’s restored 1927 masterpiece “Napoleon.” I first learned about “Napoleon” perusing the Cinema articles in my family’s Encyclopedia Britannica way back in the day. It was lauded as a brilliant film that had been lost over the years, a ghost of cinema past.
But the film wasn’t quite as lost as many had thought. Since its original 1927 release, the film has been cut and recut several times, with running times ranging from Gance’s “definitive” nine-hour, twenty-two-minute version to MGM’s 1929 American version that ran an hour and fifty-one minutes, with an even more truncated version running under an hour that came out for home projection in 1935. Film historian Kevin Brownlow spent decades tracking down prints and missing footage while Gance was still alive, and in 1981, with backing from Francis Coppola, presented a four-hour version that played at Radio City Music Hall in New York and the Shrine here in Los Angeles, accompanied by a live sixty-piece orchestra performing a score composed and conducted by Carmine Coppola. This is the version I saw (appropriately enough, on Bastille Day) with Coppola himself sitting a few feet in front of me in a balcony aisle.
So why is this 85-year-old film so important to contemporary filmmakers? Because, as the recent success of “The Artist” reminded everyone, silent films in many ways offer a very pure cinematic experience. They tell their stories almost exclusively through pictures (and a handful of title and dialogue cards).
Gance was a pioneer. He put his cameras on horseback, swung them from balconies and generally moved them in all sorts of innovative ways. While a lot of contemporary filmmakers use complex camera moves as a form of cinematic pyrotechnics that seems more about calling attention to itself than engaging audiences in the story, to me, Gance’s innovative camera work added a totally appropriate kinetic energy to the film.
But it’s the last 20 minutes of the film where he pulled out all the visual stops. Suddenly, the curtains part even more as two additional screens are revealed using Gance’s three-screen Polyvision technique to tell the remainder of the story. Sometimes, Gance used the three screens to show one extremely wide image (roughly a 4:1 aspect ratio), foreshadowing Cinerama by almost 30 years. And sometimes he presented a trio of complementary images, again foreshadowing the kind of split screen techniques that began to be used in the 1960s.
For the lucky audience members who managed to snag tickets, Abel Gance’s “Napoleon” offers one of the most powerful and moving examples of pure cinema I have ever seen. Its influence will continue to be felt for generations to come. Reportedly a digital restoration is in the works. Although I still have a laserdisc copy of the 1981 restoration, I can’t wait to see this more complete version, whether on a big screen with a live orchestra as I did over 30 years ago, or on a Blu-ray disc at home. It’s a remarkable film from which contemporary filmmakers can still learn valuable lessons in the art of visual storytelling.
Earlier this week, I attended a meeting of the International 3D Society at RealD’s screening room in Beverly Hills to catch a screening of the sword-and-sandal epic “Immortals,” which I had missed during its initial theatrical run.
Considering who was hosting the event, I erroneously assumed that “Immortals” was shot in native stereoscopic 3D to begin with. I was wrong. Although that was the original plan, it seems that the additional logistics involved in shooting 3D native were slowing down production considerably and the film’s director, Tarsem Singh Dandwar, likes to shoot fast. After a few days of shooting with 3D rigs, the delays led to the decision to abandon the 3D rigs, shoot conventionally and then convert.
While the conversion was farmed out to several companies, the lion’s share was done by Prime Focus World’s facilities in Hollywood and Mumbai. As an aside, this is the same company that handled the 3D conversion for “Star Wars Episode 1” that Lucasfilm and Fox released theatrically in February.
The discussion following the screening provided fascinating insights into the art and craft of 3D conversion. In spite of the fact that many home 3D flat panel displays offer real time 2D to 3D conversion, as do at least three Blu-ray player programs for PCs that I’m aware of, there’s a lot more to doing a successful conversion than simply applying a computer algorithm to automatically simulate a stereoscopic image.
For many, it begins with the decision as to whether to do a “two eye” or “one eye” conversion. With the latter, the digital technicians treat the existing footage as the left eye image and then extrapolate what would have been captured by the camera representing the right eye. For “two eye” conversions, the original footage is considered a composite center image (kind of like a ghost center audio channel when listening to two speaker stereo) and the technicians then extrapolate both left and right images from that center image, generally resulting in a more realistic final result. Whether it’s a single eye or dual eye conversion, background imagery that would otherwise be obscured by objects in the frame needs to be painted in, frame by frame.
One of the things that panelists from Prime Focus spoke of with pride was the way they digitally sculpted objects (especially characters) to give them dimensionality. I’ve seen a number of 3D conversions that reminded me of the old View Master slides we used to play with as kids. Sure, there was depth, but it was basically a series of flat images floating in front of each other. Digital sculpting technologies mean that, when you’re looking at a human face, for example, the stereoscopic image reveals the natural contours of the face.
Using just these examples, is it any wonder that 3D conversion can cost as much as $100,000 per minute or even more? For “Immortals,” making an educated guess using industry-standard figures, that conversion added at least $11 million to the cost of making the film. Mind you, overall the conversion was very well done, but mightn’t it have looked and felt better to audiences had it been shot stereoscopically to begin with?
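For readers who like to check the math, here is the back-of-the-envelope arithmetic behind that estimate. The running time I plug in is my own assumption of roughly 110 minutes; the per-minute figure is the industry number cited above.

```python
# Rough cost estimate for a 2D-to-3D feature conversion.
# Assumptions: $100,000 per finished minute (the industry figure
# cited above) and an assumed running time of about 110 minutes.
COST_PER_MINUTE = 100_000   # dollars per finished minute
RUNNING_TIME = 110          # minutes (my assumption)

total_cost = COST_PER_MINUTE * RUNNING_TIME
print(f"Estimated conversion cost: ${total_cost:,}")
# → Estimated conversion cost: $11,000,000
```

A shorter or longer cut moves the total proportionally, which is why conversion budgets are typically quoted per minute.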
If you talk to 3D heavyweights like Jim Cameron or Michael Bay, there’s no question. If you’re going to present a movie in 3D, it’s always better to shoot it that way to begin with. But then you have filmmakers like Tarsem Singh and Tim Burton who shoot in 2D and then convert because they believe they have more flexibility with 2D cameras than with often cumbersome 3D rigs that have to accommodate two cameras.
During the post screening milling about in the lobby that so often accompanies these events, I casually polled several of the 3D experts present about Singh’s experience. They pretty much confirmed what I suspected to begin with. While there are more moving parts involved in 3D production than 2D, with proper preparation and planning, including a 3D savvy camera crew and seasoned stereographer, it’s absolutely possible to do a shoot in 3D almost as quickly as 2D. For some reason, I flashed back to my childhood and my time as a Boy Scout. Seems their motto is just as valuable now: “Be prepared.”
A few months ago, I had a meeting with a colleague where we discussed an upcoming project of his – a high profile music event that would pay tribute to an iconic performer. The details of this event are irrelevant for purposes of this column. What is important is that, though my colleague was originally envisioning this event as a broadcast special, I immediately saw something much bigger.
“If it was my show,” I told him, “I’d shoot it in ways to future proof it because you’ve got evergreen content that’s going to be in demand for decades.” My recommendation was to use 4K 3D rigs to capture this concert and include a theatrical release for one version.
Yep, here’s your friendly neighborhood Gizmo Guy touting the benefits of 3D again, even though stereoscopic features are still more the exception than the norm.
Let’s take a moment to enter our time machine and travel back to the mid-1950s, when color TV was in its pre-infancy. Everyone knew the technology was being developed and would eventually hit the market, but the FCC had yet to approve a standard.
Meanwhile, off in a mythical realm called Burbank, there was a visionary wizard who had made a career out of identifying exciting new trends and making sure he was ahead of the curve. If you haven’t guessed already, I’m talking about Walt Disney, who at the time was simultaneously embracing two future trends – television and theme parks. While most of the industry perceived television as a threat, Walt saw it as a powerful promotional and branding tool and became the first major studio to embrace the new medium.
His “Disneyland” anthology series played to top ratings at two networks for decades, airing a mix of original programming and serialized theatrical features. Even though there was only black and white broadcasting when the show first went on the air, Walt insisted on filming all his original programming in color to future proof his content. Smart move, Walt! When the FCC approved RCA’s technology as the national standard for color television, NBC (then owned by RCA) aggressively pushed to get high profile color shows on the air so there would be content for people to watch in color. Since Walt had already filmed all the “Disneyland” shows in color, when the time came for him to move to NBC, he was well prepared with a library of proven, popular entertainment already in the new format.
Let’s come back to 2012. High def has finally become the standard for all broadcast networks and many of the cable networks. Now, 3D is in that transition stage, moving from novelty to norm. While we’re still very much in Learning Curve Mode on the creation of quality 3D content (more on that in an upcoming column), my crystal ball indicates that just as color and multichannel sound went from novelties and event releases to standard filmmaking tools, it’s just a matter of time before 3D gets to that point as well.
I reminded my colleague that, if you shoot a 3D event properly, you already have high quality 2D footage that you can use in just about all media. Further, thanks to constant advancements in technology, it won’t cost that much more to shoot and edit in 3D than it does in 2D. He called my bluff on that and challenged me to secure bids for his project. While I got a broad range of prices from qualified production houses, the ones on my short list gave me pre-negotiation prices that weren’t that much more than the producers were planning to spend to begin with.
The 3D experts I spoke with told me that, if the show was shot right, adding a stereoscopic component to the post process would probably add less than 20% to the post budget. Now, for just a modest bump in the below the line budget, we’ll have a show that can go out theatrically in 2D and the more lucrative 3D, on 3D Blu-ray, on one of the dedicated 3D cable networks and much, much more. Meanwhile, you still have all the distribution options that a 2D shoot would enable.
What do you think Walt would do?
Not surprisingly, I spent a lot of time at this year’s Consumer Electronics Show looking at the latest and greatest in flat screen displays. Last year, we saw prototypes of the first 4K home displays, and I opined that, given the size of most flat screens in the home, a 4K display is overkill. I was also skeptical that many people would be able to see any difference at all, let alone enough of a difference to justify the higher cost of 4K technologies, especially when most digital cinema projectors use 2K technology on screens a hundred times the size of a large home display.
While I’m still skeptical, some of what I saw this year leads me to soften my stand. First of all, let’s define exactly the difference between 4K, HD and 2K DCP. For those not familiar with the language, digital image resolution is defined by a grid that’s X number of pixels (i.e. “picture elements”) wide by Y number of pixels high. HD images measure 1920 pixels wide by 1080 pixels high. Using television terms, that translates into 1080 horizontal lines of resolution – six times the pixel count of a DVD’s 720 x 480 image. 4K Digital Cinema’s 4096 x 2160 image yields roughly four times the resolution of HD.
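For anyone who wants to verify those resolution comparisons, the arithmetic is just width times height. Here is a quick sketch of the pixel counts for the formats discussed above:

```python
# Pixel-count comparison of common video formats (width x height).
formats = {
    "DVD (480p)":        (720, 480),
    "HD (1080p)":        (1920, 1080),
    "4K Digital Cinema": (4096, 2160),
}

# Total pixels per frame for each format.
pixels = {name: w * h for name, (w, h) in formats.items()}
for name, count in pixels.items():
    print(f"{name}: {count:,} pixels")

# HD has exactly 6x the pixels of DVD...
print(pixels["HD (1080p)"] / pixels["DVD (480p)"])          # 6.0
# ...and 4K Digital Cinema has roughly 4x the pixels of HD (about 4.27x).
print(pixels["4K Digital Cinema"] / pixels["HD (1080p)"])
```

The “four times” shorthand for 4K vs. HD is an approximation; the exact ratio depends on which 4K container you measure against.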
In terms of the audience experience, using film terms, VHS is roughly the equivalent of Super 8mm; DVD is comparable to 16mm; Blu-ray akin to 35mm and 4K is like 65/70mm. The more pixels you have, the more detail becomes visible to the naked eye and the fewer artifacts you’re likely to see. For film, those artifacts take the form of grain. With digital, it’s being able to see the pixels, something that’s pretty easy to see with consumer digital projectors if you sit close enough to the screen.
At CES this year, JVC Professional Products showed off the GY-HMQ10, a $5,000 palm-sized 4K camcorder capable of shooting 3840 x 2160 resolution at 24p, 50p and 60p frame rates (strangely, no 30p option). The camera can store up to two hours of footage on a 32GB SDHC card, according to JVC’s posted specs.
On the display side, a number of exhibitors, including Panasonic, Toshiba and Vizio, showed off 4K displays as well as even higher resolution “Ultra HDTV” displays capable of handling resolutions up to 7680 x 4320, even higher than standard 4K Digital Cinema. According to NHK Science and Technology Research, the Japanese R&D think tank that developed UHDTV, the new format offers roughly 16 times the resolution of standard 1080p HDTV.
However, one clear benefit of these new 4K displays is visible when it comes to 3D using RealD-compatible passive displays. As I’ve said before, one of the trade-offs of passive vs. active display technologies is that passive displays have to essentially cut the number of horizontal lines in half so that, instead of the full 1080-line image each eye would see with an active shutter system, each eye gets only 540 lines on a passive display. While the drop in resolution becomes less noticeable the further back you sit from the display, it is a quality compromise.
Since the new 4K displays start with twice the line count to begin with, even passive displays can deliver a full 1,080 lines per eye, the same vertical resolution as standard HD. And yes, since there has yet to be any native 4K content available for home viewing, until that happens, these sets have built-in upscaling to simulate 4K resolution.
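The line-count arithmetic for passive 3D is simple enough to sketch out. This is my own illustration of the trade-off described above, assuming the display alternates its horizontal lines between the left and right eyes:

```python
# Lines per eye on a passive (polarized) 3D display, which
# alternates the panel's horizontal lines between left and right eyes.
panels = {"1080p HD": 1080, "4K (2160-line)": 2160}

for name, total_lines in panels.items():
    per_eye = total_lines // 2  # each eye sees every other line
    print(f"{name} panel: {per_eye} lines per eye")
# 1080p HD panel: 540 lines per eye
# 4K (2160-line) panel: 1080 lines per eye
```

In other words, a 4K panel gives each eye what a 1080p panel gives both eyes combined.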
Now we have a classic “chicken and egg” situation, as there has yet to be any native 4K, much less UHDTV, content available to run on these higher resolution displays. One exhibitor had a side-by-side comparison of its 4K and 1080p displays where there was clearly more detail visible on the former. But until I can see a side-by-side comparison using commercial Blu-ray discs on 1080p and 4K displays, I’m still skeptical as to how much of a difference the average Joe will be able to see, even on home displays as large as 80” or more.
Hollywood’s PR machine is bemoaning the recent defeat of the anti-piracy bills that were pending in Congress thanks to the grass roots campaigns of online heavyweights like Google and Wikipedia. There is no question that piracy is a serious issue that needs to be continuously addressed. But, as many have pointed out, those bills were so draconian in the way they dealt with the issue that there would have been a lot of collateral damage to legitimate businesses.
Just with current laws and law enforcement attitudes, people are already getting hurt. For example, Lily is a filmmaker catering to a niche market of film buffs. For our purposes, the specific type of material she produces is irrelevant. What is important is that, until a few days ago, she used a site called Megaupload.com to securely host her finished productions so she could easily provide customers a download link when they made a purchase and so she could send the files to others she worked with if they were sharing the content. Imagine her shock last week when she attempted to upload a file, only to find a government notice on the page, stating Megaupload and its affiliates had been seized by the Department of Justice due to piracy. “My account and those of many legitimate users are gone, just poof!”
The DOJ’s rationale for seizing the site was finding evidence that a number of people were using Megaupload as a convenient way to distribute bootlegged copyrighted material. And no doubt, a portion of the site’s customers were either uploading or downloading unauthorized copies of movies and more. But what about the millions of people who used sites like Megaupload for legitimate reasons, who no longer have access to their own copyrighted content? For that matter, what about people who may have used the service to archive very personal files that are now being pored over by DOJ investigators, in a clear violation of their privacy?
But let’s get back to the issue of piracy, shall we? As I said, this is absolutely a legitimate concern for everyone who makes and markets content. It’s also important that we be realistic about what’s out there and put the threat and the potential consequences in perspective.
There are two points I want to make. First, realistically, no matter what digital rights technologies are developed and no matter what kind of draconian laws are introduced and enforced, people will always find ways of securing and distributing unauthorized copies of copyrighted content. Second, the entertainment industry has a history of panicking and overstating the potential damage of new technologies it initially sees as threats.
Back in the 1950s, as television got more and more of a foothold in homes, Hollywood execs cried that the glowing tube would destroy the motion picture industry by siphoning off audiences. In time, they realized that television could not only be used as an effective marketing tool to promote movies (Walt Disney was an early master of this), it would also prove to be a valuable revenue stream as broadcast rights to movie packages became a lucrative business.
In the late 1970s, several studios notoriously sued Sony over the introduction of the Betamax VCR out of fear that giving consumers the ability to record programs off the air was an industry-threatening violation of their copyrights. Within a few years of the Supreme Court ruling against the studios, those very same studios discovered to their delight that the very technology they had portrayed as an industry killer was actually a significant source of new revenues. Imagine!
Back in the early 1980s, I managed a video store directly across the street from Grauman’s Chinese. At that time, pre-recorded videocassettes of studio movies routinely sold for as much as $100 (in 1980 dollars!). The studios worked on the premise that this was strictly a rental medium, so they needed to rack up as much revenue as they could up front with high sticker prices. Because of those high prices, people would often rent a movie so they could make an illegal dub for their own libraries by connecting a pair of VCRs. Early copy protection technologies were easily circumvented with readily available black boxes.
Studios made extravagant claims about how much money they were losing to piracy even then. They erroneously based their projections on the premise that every illegal copy of a movie would otherwise have been purchased at full price when, in reality, only a small fraction of the people who obtained bootlegged copies would ever have purchased those titles at full price.
Then Paramount Home Video embarked on a bold experiment – to offer the hit movie, “Star Trek II: The Wrath of Khan” for the sell-through price of $39.99. At that price, tapes flew out the door like nobody’s business. The experiment proved that when you offer a quality product at an attractive enough price, consumers will happily buy legitimate copies instead of going to the time, expense and hassle of bootlegging.
We're in a similar situation now. The Internet and broadband offer new opportunities for studios to develop additional revenue streams, even as others choose to illegally distribute that same content. To effectively address the issue of piracy, the studios would benefit greatly from finding out WHY people download movies and then offering legitimate alternatives that are so inexpensive, so easy to use and of such high quality that it’s simply not worth it for most consumers to put time and energy into looking for illegal, unauthorized downloads.
The more studio executives understand and effectively address the elements that motivate most people who download bootlegs, the better they will be able to dramatically reduce piracy without resorting to the kind of draconian measures that were represented in the SOPA and PIPA bills and that caused financial harm to people like Lily, whose online distribution resource was seized without warning.
The 2012 Consumer Electronics Show (CES) ended over two weeks ago and I’m still sorting through a flood of information, including 30+ press kits on USB thumb drives. This year, Variety made its presence known big time. In addition to publishing a special “Entertainment Matters” version of the magazine during each of the show’s four days, they also hosted one of their increasingly popular “summits” featuring all-star panels of high level showbiz execs talking about the impact of entertainment on technology and vice versa.
While the CE industry and Hollywood have worked hand in hand since the 1920s (if not sooner), it seems to me that new technologies have motivated even stronger ties and alliances, especially when it comes to content delivery systems and new strategies to enable consumers to more easily access that content in ways that create new revenue streams for the studios and other content owners.
With that in mind, so many exhibitors this year kept talking about “the cloud,” that I began to wonder how many of them really knew what it was, other than the buzz word of the week. More to the point, what’s in it for the consumer and why is it important for P3 Update readers?
In really simplified technical terms, “cloud computing” is a way of enabling users to access data (including high def audio/video feeds) seamlessly from a variety of sources, so that both the computing power needed to process and transmit the information and the storage of the data itself can be spread out over many different systems.
But of course, consumers are just as uninterested in how cloud computing actually works as they are in how their local utility manages to deliver electricity to their homes. All they care about is that it works. In terms of the consumer experience, when cloud computing works properly, accessing movies, TV shows, albums and the latest in computer software from “the cloud” is as seamless and as much a no-brainer as accessing something directly on your hard drive. In fact, for consumers, “the cloud” is essentially a glorified external hard drive, except that the data being accessed (including entertainment content) is physically stored in banks of computers located all over the country, and quite possibly all over the world.
Studios are already training consumers to get their content from cloud systems with the introduction of the UltraViolet service as a means of accessing the digital copies of movies that are now commonly part of what you get when you buy a DVD or Blu-ray disc. But I see this technology being embraced in the very near future as a way to deliver digital cinema files to movie theatres, instead of the current method of either a satellite transmission or shipping a physical hard drive. It’s cheaper, more secure and enables distributors to more accurately monitor theatrical screenings, including a time/date stamp every time a file has been accessed and played.
The good news for filmmakers is that, as the next step in the evolution of digital cinema distribution, the cost and quality benefits of cloud technology should make it easier for indie product of all types to get screen time, since it becomes that much more cost effective to make DCPs available first to exhibitors, then directly to consumers on a VOD/PPV basis.
Cloud computing also has the potential to make post production faster by letting crews upload freshly shot footage from off-site locations, making it immediately accessible anywhere in the world the editor has a broadband connection. For example, let’s pretend this technology was available when John Huston was shooting “The African Queen” on location in Africa using a digital Panaflex camera (or something similar). Not only would there have been no more waiting for the film to be shipped back and forth hundreds of miles just so everyone could watch dailies, Huston’s editor in Hollywood could conceivably have had a rough assembly cut ready to watch while Huston was still on location, thousands of miles away.
Cloud computing was one of several technologies showcased at CES that has significant implications for P3 Update readers. I’ll share more with you as I continue to wade through and digest what I saw.
Here I am, wading through close to 200 emails from publicists wanting me to set booth appointments at the 2012 Consumer Electronics Show that begins in less than two weeks. I’ve gone through this ritual pretty much every year for ages and it’s worth it. You see, CES is much more than a trade show for gadgets and gizmos. It’s a showcase for tools to create and display human drama in virtually all media and a preview of where much of our industry is heading in the near future. You also never know when you’re going to find a potential gem.
For example, there’s a company called Fulton Innovation that’s going to be showing off what they call “wireless power” technology. Their pitch email describes one application of this technology as “magazines with cover art that lights up and flashes while sitting on a shelf, as if by magic.” I’m already picturing how this capability could be adapted for practical effects on-set.
Then there are the traditional exhibitors showing off the latest and greatest in audio and video technologies, including 3D HD camcorders, 4K ultra HD home displays and 11.1 channel sound systems. That’s not a typo, boys and girls. DTS will be promoting their eleven point one surround sound technology that incorporates height as well as horizontal speaker placement for an even more immersive audio experience.
Our friends at Tiffen will be there showing off the latest gels, filters and Steadicam products for consumers and pros on tight budgets alike. With so-called internet connected “smart TVs” becoming increasingly popular, we’ll also be seeing a growing number of vendors showcasing TV apps.
These things should be important to P3 Update readers because, on the content creation side, so-called “consumer” products are becoming increasingly powerful and sophisticated, often making them viable professional tools. And on the playback side of the equation, these devices and services are how consumers, the ultimate end users of the content we create, will be consuming that content. And in many cases, new products like TV apps present opportunities for new content creation and distribution.
What can I say? It’s a Gizmo Guy’s paradise! You’ll read much more about what I see at CES in just a few weeks. Meanwhile, I need to finish my exhibitor “hit list” and get ready for what promises to be a great New Year’s Eve celebration.
There’s no question that digital technology is dramatically transforming our industry. Several major camera companies have announced that they’re no longer making 35mm cameras; Eastman Kodak is teetering on bankruptcy; around the world, cinemas are making the transition from film to digital projection and studio heads quiver in fear of growing digital piracy.
Yes, we’re very much in transition. But that also means ground floor opportunities for those artists and entrepreneurs savvy enough to take advantage.
Digital technology now makes it possible for filmmakers with even microscopic budgets to achieve production values and a level of quality previously available only to those near the top of the production food chain.
Digital distribution, both theatrically and into homes, now makes it more economically viable for indie filmmakers to get their product seen just about anywhere and everywhere. It’s becoming increasingly common for consumers to watch entertainment content not only in theatres and on their TVs, but also on their computers, laptops, tablets, portable media players and phones. With portable devices like LCD-equipped headsets and pico projectors, it’s now possible to literally enjoy a big screen experience just about anywhere.
Interestingly enough, thanks to outlets like YouTube, the short subject is making a strong comeback, giving novice and experienced filmmakers alike a venue to hone the craft of succinct visual storytelling. Sometimes the results frankly suck. But once in a while, there’s a real gem that can come from anywhere.
But with all this change, I also want to point out that there are some things that will never change. For example, even with all the new delivery options, I don’t see movie theatres going away any time soon. There’s something about that communal experience of seeing a compelling story unfold on a 60-foot-wide, 25-foot-high screen with hundreds of like-minded audience members that can never be replicated by even the most expensive, state-of-the-art home theatre system. It’s an almost primal human desire to share dramatic experiences and, even with admission hikes and pricey concessions, a night at the movies is still one of the most affordable types of entertainment.
The other thing that remains constant is the need for what my friend Susan Johnston, who runs the New Media Film Festival, calls “great stories, well told.” Storytelling is part of our DNA. No matter the delivery or presentation medium, there will always be high demand for well-crafted scripts featuring great stories and compelling characters.
Hollywood, like so many other industries, goes through cycles of change that doomsayers predict will “kill the industry.” It happened in the late 1920s with synchronized sound, in the 1950s with television, 20-some years later with the introduction of the VCR and now in the digital age. Every time, those supposed “industry killer” technologies ultimately became big money makers for the industry and the source of untold jobs once folks figured out how to take advantage of them. And each time, the people and companies who succeeded were the ones who found ways to embrace these new technologies to deliver well-told stories that matter, with characters we care about. That will never change, even when the technology does.
Happy New Year everyone!