Technology may enable real-time rendering
Audiences may never sense the difference, but a revolution is afoot in animation, one that could make the creation process faster and more responsive.
On the horizon are rapid, possibly even real-time rendering capabilities, thanks to advances in simple off-the-shelf graphics cards (GPUs), like those that power popular home videogame systems. That could spell major changes in an industry where artists must sometimes wait hours to judge how a single frame or sequence will look when rendered.
“It’s like turning the contrast knob on your TV set,” says Bill Villarreal, VP of technology at Sony Pictures Imageworks. “If you had to wait three minutes each time to see how the picture looked, at some point you’d lose interest and say, ‘It’s good enough, let’s watch it this way.’”
Faster rendering lets digital animators make more and better adjustments to their work. And though GPUs don’t accelerate every computation, they can potentially offer 128 times the graphics processing power of a general-purpose CPU.
One GPU maker, Nvidia, offers its own animation rendering software, Gelato, for use with its products. Others working in the field, such as Jules Urbach (of JulesWorld) and Extra Large Technologies (XLT), may work with cardmakers but are entrepreneurial operations.
“We’ve been flying under the radar for a few years now, and we’re just starting to come out,” says Yoni Koenig, a partner at XLT and the company’s head of production. XLT’s product, Mach Studios, is being used for animation production on direct-to-DVD projects by several companies.
Harnessing graphics cards lets artists be more flexible. Changing the camera angle in a CG shot, for example, no longer means returning the shot to the start of the assembly line. As a result, Koenig says, “The economic model shifts drastically.”
Oscar-winning vfx house Rhythm & Hues is using GPUs for some tasks as well. Mark Brown, VP of technology at R&H, says that while faster rendering won’t bring projects to the finish line sooner, it can deliver better results.
“If I can show you five iterations instead of one, we’re going to be able to keep working on something and make it look cool,” Brown says.
For motion-capture features, fast rendering lets directors see the actors’ performances applied to digital proxies as they happen.
“The GPUs are designed mainly for 3-D shading in a game,” explains Mova president Steve Perlman. “We were able to turn the chip around, take the output of the Contour system and make it look like a 3-D shading problem. It gets pumped into the Nvidia card, and out comes a real-time capture of the face.”
But for animation production, there are downsides to hardware-based rendering, too.
“All these technologies achieve their performance largely by setting limits on the resources that can be used,” says CG supervisor Rene Limberger of Sony Pictures Imageworks. “Even for animation, the scene complexity is probably way above any of these hardware systems’ capability.”
So they’re fast, but they don’t deliver feature-film quality. That means software-based rendering in traditional render farms is still necessary for movies. It may only be a matter of time, though, before that changes.
DreamWorks Animation’s chief technology officer, Ed Leonard, sees a day when the two approaches merge. Today, DWA prefers the flexibility of the general-purpose multicore processors in its render farm.
“GPUs limit your flexibility,” he says. “You’re forced to work the way they want you to work.” But he concedes that jobs tailored to GPUs can be done much faster, “sometimes many orders of magnitude” faster.
Down the road, the multicore CPUs DreamWorks prefers could come with a graphics processor built in.
“That’s where the game-changer happens,” Leonard says. “Then you don’t have to worry about moving your data off to these GPUs.
“That is the future we see quite clearly and which we’re headed for.”