New techniques give filmmakers greater flexibility

Robert Zemeckis doesn’t seem to mind, but many helmers simply won’t make the leap to performance capture, at least not as long as it entails surrendering 100 years of filmmaking tradition to work on a fixed concrete stage (or “volume”) populated by leotard-clad actors bedecked with dozens of pingpong ball-sized reflectors (or “markers”). But that may change: newer, less obtrusive options are emerging.

“I’ve always thought the days of wetsuits and funny markers were going to be numbered, and because of the speed of computer technology, it’s happening a lot faster than we imagined,” says Demian Gordon, motion-capture supervisor for Sony Pictures Imageworks.

Out of the soundstage and onto the set

The familiar “Beowulf” style of operating in a closed, lablike environment with an array of infrared cameras bolted to the walls may make sense when creating a fully computer-animated feature, but it simply isn’t practical when trying to integrate motion-capture performances with live-action footage.

That’s why ILM developed a proprietary iMoCap system flexible enough to operate on-set, under first-unit lighting, for the “Pirates of the Caribbean” sequels. “One of our goals is not to encumber the filmmakers with a lot of effects apparatus or constraints,” ILM animation supervisor Hal Hickel explains.

The optical system let two tripod-mounted, prosumer-grade HD video cameras, placed anywhere with a clear view of actor Bill Nighy, capture his physical performance as Davy Jones. “The data is not as good as what you get on a motion-capture stage, but the tradeoff is that it’s robust, it’s lightweight, and we can take it anywhere,” Hickel says. The character’s face was still animated by hand.
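
ILM hasn’t published iMoCap’s internals, but the geometric core of any two-camera optical setup is triangulation: the same point seen from two calibrated views pins down a 3-D position via a small linear system. A minimal sketch of that idea (the camera matrices and pixel coordinates below are invented for illustration, not ILM’s data):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel
    coordinates of the same feature in each view. Returns the 3-D point.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X: u*(P row3 . X) = P row1 . X, likewise for v.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]           # de-homogenize

# Toy setup: two identical cameras one meter apart (values invented).
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.2, 0.1, 3.0])  # a "marker" 3 m in front of camera 1
print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))
# -> approximately [0.2  0.1  3. ]
```

In practice, a portable system also has to solve for where the cameras themselves are on each setup, since nothing is bolted down.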

Away with cumbersome markers

While elaborate indoor infrared camera systems remain the most reliable way to record mo-cap data, those bulky reflective markers take a long time to apply, can skew the data if placed incorrectly and often slip or fall off.

San Francisco-based Mova found an alternative in its markerless Contour Reality Capture system, designed specifically for faces and cloth. The performer’s face is coated with phosphorescent makeup (costumes get special glow-in-the-dark dyes), and blacklights strobe just beyond the threshold of human perception; multiple cameras record facial movements during the tiny interval when the lights are off.
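
The trick is timing: strobe fast enough and the eye fuses the flicker, while the cameras can still isolate the dark, glow-only intervals. A toy scheduler showing the interleaving (the frame rate and phase names are assumptions for illustration, not Mova’s published specs):

```python
CAPTURE_FPS = 120  # assumed: fast enough that dark frames are imperceptible

def frame_phase(frame_index: int) -> str:
    """Even frames: blacklights on, phosphor recharges, normal-looking footage.
    Odd frames: lights off, cameras see only the glowing makeup."""
    return "lit" if frame_index % 2 == 0 else "glow"

for i in range(6):
    print(f"t={i * 1000 / CAPTURE_FPS:5.2f} ms  frame {i}: {frame_phase(i)}")
```

Frames tagged “glow” would feed the geometry solve, while the interleaved “lit” frames can double as a reference plate of the actual performance.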

“Contour is almost to the point where we can show the faces live as they’re being captured,” says founder Steve Perlman. But the system does have its limits: “We need a situation in a room where we can control the lighting.”

Single- and no-camera solutions

Another company, Image Metrics, eschews makeup and markers altogether, using machine-vision technology to recognize faces and translate even their subtlest movements to an animated model. The company’s proprietary software can analyze any performance in which the face is visible, though it typically works from a single helmet-mounted standard-def camera.

“In theory, you need multiple cameras to get to any 3-D position, but if you know what it is you’re looking at, then you can get 3-D from a single camera,” Imageworks’ Gordon explains. And the way things are headed, cameras themselves may not even be necessary: “Any system where you can measure time of flight is potentially a motion-capture technology,” he says, opening the door for possible inertial, magnetic and radio-wave approaches.
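
Image Metrics keeps its solver proprietary, but the principle Gordon describes, that a known model makes 3-D recoverable from a single 2-D view, is the classic perspective-n-point (PnP) problem. A sketch using OpenCV’s stock solver (the face landmarks, pixel coordinates and camera intrinsics here are all invented for illustration):

```python
import numpy as np
import cv2  # OpenCV; solvePnP recovers a 3-D pose from one view

# Hypothetical 3-D landmarks on a generic face model, in meters
# (a real pipeline would use a scan of the actual performer).
model_points = np.array([
    [0.000,  0.000,  0.000],   # nose tip
    [0.000, -0.063, -0.012],   # chin
    [-0.034, 0.032, -0.026],   # left eye outer corner
    [0.034,  0.032, -0.026],   # right eye outer corner
    [-0.026, -0.029, -0.022],  # left mouth corner
    [0.026, -0.029, -0.022],   # right mouth corner
])

# Where a tracker found those landmarks in one video frame (invented).
image_points = np.array([
    [359.0, 391.0], [399.0, 561.0], [337.0, 297.0],
    [513.0, 301.0], [345.0, 465.0], [453.0, 469.0],
])

# Intrinsics for a hypothetical 960x720 camera, no lens distortion.
camera_matrix = np.array([[960.0, 0.0, 480.0],
                          [0.0, 960.0, 360.0],
                          [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, camera_matrix, None)
print("rotation (Rodrigues vector):", rvec.ravel())
print("translation (meters):", tvec.ravel())
```

Gordon’s camera-free scenarios rest on even simpler physics: any time-of-flight sensor yields a distance directly, since range is just signal speed times half the round-trip time, no image required.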

Introducing virtual vision

Even as capture methods get more sophisticated, the challenge for filmmakers remains how to direct the performances themselves. “Unless you have the ability to see it in real time, you don’t make the choices you make on the live set, so the end result doesn’t look as natural as what you’re used to seeing on a normal film,” says visual effects guru Rob Legato, who served as a motion-capture consultant on James Cameron’s forthcoming “Avatar.”

Cameron himself admits: “It just looks like a bunch of people standing around in wetsuits in a big white volume, so creatively, it’s very daunting.” Because the actors were powering 7-foot alien creatures and interacting with elements at different scales, Legato helped develop a SimulCam system that allowed Cameron to visualize a low-res version of the digital creatures and environments. Guided by the virtual performances he could see in his viewfinder, “I can use the cameras to block the actors, so I always know my shots are going to work later,” the director says.
