Motion capture gains respect among f/x wizards
After more than a decade of heavy hype and little respect, the academic-sounding animation technique known as motion capture may finally be ready for prime time.
“Mo-cap,” as many of its proponents are fond of calling it, has been put to use in some high-profile projects in the past two years – this summer’s “Final Fantasy” and “The Mummy Returns,” 1999’s “Star Wars: Episode I,” the upcoming sequel and the next two installments of “The Matrix.”
As with just about any new digital technology – computer-generated images for film, streaming media on the Web or wireless communications – the vastly increased power of computers has transformed mo-cap from niche player into the mainstream: in this case, from a version of cheap rotoscoping to a real-time, high-definition film tool.
“It has become a new form,” says Jeff Light, motion capture supervisor at Industrial Light and Magic. “It won’t replace ‘Toy Story’-type technology or live-action mind you, but motion-capture-based projects will become a film genre of their own.”
The technology is a process of capturing the movement of a real person (or animal or object) in a defined performance space and mapping it onto a computer-generated image.
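In rough terms, the mapping step turns tracked marker positions into joint motions for the digital character. The sketch below is a minimal illustration of that idea, not any studio's actual pipeline; the marker coordinates and joint names are invented for the example. It computes the bend of an elbow from three captured markers, the kind of value that would then be applied to the matching joint of a CG figure, frame by frame.

```python
import math

# Hypothetical captured marker positions (x, y, z in meters) for one
# arm, as a mo-cap system might report them for a single frame.
shoulder = (0.0, 1.4, 0.0)
elbow = (0.3, 1.1, 0.0)
wrist = (0.6, 1.3, 0.0)

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by segments b->a and b->c."""
    v1 = tuple(a[i] - b[i] for i in range(3))
    v2 = tuple(c[i] - b[i] for i in range(3))
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# This per-frame angle would drive the elbow of the CG character so the
# digital figure mirrors the live performer's pose.
print(round(joint_angle(shoulder, elbow, wrist), 1))
```

Repeating this for every tracked joint on every frame is what lets the animated character reproduce the performer's movement in real time.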
Rotoscoping is a decades-old animation technique in which artists draw and paint over a filmed human performance frame by frame, as in Ralph Bakshi’s 1978 “Lord of the Rings.” It is much more expensive and time-consuming than mo-cap.
The trade-off so far has been that rotoscoping looked better. That has changed.
Among other things, the tracking hardware and processing software are now accurate enough to capture subtleties of movement that were impossible to record five years ago.
“Depending on the power of the computer, one can achieve near-realism with motion-capture,” says Dr. Wayne Carlson, professor of design technology at Ohio State University. The acad’s Advanced Computing Center for the Arts and Design has been at the forefront of mo-cap since the technique’s inception.
Former student Light recently returned to his alma mater to capture a performance by legendary French mime Marcel Marceau. Now that the original heavy mo-cap suits and wires have been replaced with lightweight wireless gear, mapping the 77-year-old Marceau’s moves for posterity was not a problem.
“The equipment is at least five times more accurate than it was in the late 1990s,” Light says. In some cases, depending on the number of cameras used and the volume of the performance space (the smaller, the more accurate), that level of precision is increased tenfold.
It’s not just juggernauts like ILM that are able to deliver high-end product for the film biz.
Two-year-old, Los Angeles-based Spectrum Studios may not exactly be Skywalker Ranch but it is playing with the big boys near Marina del Rey. Among other things, Spectrum is in the middle of generating images and data to be used in “Matrix 2” and “Matrix 3.”
Where once a motion-capture stage could be only eight feet by eight feet with four or five cameras, Spectrum’s stage is half the size of a basketball court, ringed with 28 cameras.
Recently, company computer boffins and animators managed to capture the performances of 17 actors simultaneously, a feat hitherto impossible.
“Our capabilities have tripled in the past eight months,” says Stephen Brain, Spectrum’s chief operating officer.
At a recent demonstration, animators were able to capture a performance of two martial artists and produce an animated version of it in minutes.
“The roadblocks are gone,” says Susan Van Baerle, a Minneapolis-based animation consultant. “Not only has the technology improved drastically, so has the skill level of the people using it.”
That, say the experts, will change yet again come this fall.
With existing systems, the analog signals from the cameras have to be converted to digital files. Santa Rosa-based Motion Analysis Corp. plans to unveil a new system that’s digital from image capture to image rendering.
“It will eliminate the need for a lot of costly hardware,” Brain says.