Josh Brolin hardly looked tough shooting his role as super-villain Thanos in Marvel’s “Avengers: Endgame,” dressed as he was in a skintight motion capture bodysuit with multicolored tracking markings, two HD cameras attached to headgear pointing at his dotted face, and a pole sticking up from the back of his vest holding a cardboard cutout of his character’s countenance above his head. But the sibling directing team of Anthony and Joe Russo still acted as if he were a badass.
“Brolin would love it that we would treat Thanos like he was a gangster character,” says younger brother Joe Russo. “We’d use terminology that would be reflective of that and say he’s a psychopath and he wants control, not he’s a giant purple creature who relates to the universe this way, so he could correlate it to a genre and character motivation that he could access.”
Whether it’s superhero epics such as “Aquaman” and “Venom” or period fantasies including “Mary Poppins Returns” and “Christopher Robin,” today’s directors frequently find themselves coaxing convincing performances from actors who are talking to tennis balls standing in for CG characters to be added later (or, in Brolin’s case, to castmates in goofy motion-capture suits) on soundstages with little or nothing in the way of sets or props.
The Russos, who have directed four Marvel VFX spectaculars beginning with 2014’s “Captain America: The Winter Soldier,” say the key to success in this environment is simplicity.
“It’s already abstract working with a green screen and green props that are standing in for digital props,” says Joe Russo. “And if you’re dealing with abstraction and your direction is abstract, I think it tends to turn into mush. So you have to create an emotional life in the simplest way possible that will translate on screen.”
Today, directors typically use real-time rendering on set to see an approximation of how the mocap performances will look after their digital transformation, placed within the virtual environments they’ll inhabit in the finished film. That preview is especially vital on a movie such as director Robert Rodriguez’s “Alita: Battle Angel.”
“Jackie Earle Haley is only 5 feet 5 inches tall and his character Grewishka is 10 feet tall, so Robert needed to see what it was going to look like in the shot so he could frame the camera for it and get it to look like he would expect it to look,” says Weta Digital’s Eric Saindon, VFX supervisor on “Alita: Battle Angel.”
Shooting “Ready Player One” at Leavesden Studios in England, director Steven Spielberg and his actors took it one step further, using VR goggles to view the mocap renderings and the virtual sets.
“If you’re doing a live-action movie, the actors are inspired by their surroundings, as is the director in terms of how he’s staging the shots,” says ILM’s Roger Guyett, VFX supervisor on “Ready Player One.” “So we built most of the environments beforehand or proxies of them so Steven could set up a location, whether it was the castle or Planet Doom, put on the VR goggles and move around that environment, so he was doing a virtual scout. By doing that, he knew how he could stage some of that action and we could do the same thing with the actors so they could see where they were as a real place.”
Spielberg also made use of unaltered live-action footage of the actors.
“Steven made sure he got very specific views of all the actors’ faces from the camera operators, because he didn’t want to miss any of the nuance that they were putting into the performance,” Guyett says. “So when he was reviewing their performances, not only was he reviewing them in the virtual world, he was reviewing the actual performances of the actors.”
Motion capture has come a long way since actor Andy Serkis made the technology famous with his performance as Gollum in “The Lord of the Rings” trilogy in the early 2000s.
Back then, “it was very rudimentary,” Saindon says. “We used the motion capture to sort of gather the idea of the performance, and then animators would have to go in and key frame his facial animation and redo his body animation. On ‘Avatar,’ we really stepped up a notch and started capturing much more of the performance and the facial and it was more refined to match the actor, but it still required doing key-frame animation and interpreting the data.”
The current state of performance-capture technology transfers an actor’s movements to a digital counterpart far more directly. Directors are still free to tweak the results digitally in post, of course, but Rodriguez feels it’s best not to.
“When Rosa [Salazar, ‘Alita’ star] kicks a table, her face moves a certain way, synced to her body,” says Rodriguez. “So we tried to not do a lot of key-framing, because the quirks are what make it real. [But] when you’re playing a character that doesn’t resemble you, there’s an adjustment that needs to be done because some things don’t translate one-to-one. Alita’s mouth is smaller than Rosa’s and her eyes are larger.”
On “Avengers: Endgame,” Thanos and his henchmen in the Black Order were largely created using motion capture, but characters such as anthropomorphized raccoon Rocket (voiced by Bradley Cooper) relied more on key-frame animation.
On set, Rocket was played for the motion capture cameras by 6-foot-tall Sean Gunn, who “would just curl down as low as he could go,” says “Avengers” VFX supervisor Dan DeLeeuw of Marvel Studios. “Then we’d bring in Bradley [to record his dialogue] and we’d have him wear the helmet with the two little cameras on the front. We’re not dotting his face, but it’s still something that’s super-helpful for all the animators.”
Deciding how Rocket and other largely key-frame animated characters such as tree creature Groot would move their faces, bodies and appendages was a collaborative effort involving a group that included the Russo brothers, DeLeeuw and fellow VFX supervisors Russell Earl (ILM) and Kelly Port (Digital Domain), as well as animation director Jan Philip Cramer (Digital Domain).
“There’s a consortium of generals that get together and talk through style, tone and behavior, then the group communicates to their army of thousands,” says Joe Russo. “We usually have a list of rules about how a character moves, which will derive from stunt players who do movement studies for us where we all go, ‘That’s really interesting, let’s apply that across the board.’ But we’ll often get things back [from the team] that are twice as good as the original concept.”
Generally, they try to follow the laws of physics and rules of movement, “but these are fantastical creatures, so who’s to say what’s right?” says Anthony Russo. “It’s finding the sweet spot between what feels interesting or surprising or unusual and what is also naturalistic and plausible.”
Even when filmmakers use AI crowd-creation software, as Spielberg did to populate a virtual world known as the Oasis with hundreds of thousands of inhabitants in “Ready Player One,” there are scores of directorial decisions to be made.
Says Guyett: “You’re genuinely building a world and then putting these characters in it and saying, ‘hey, if you’re a skeletal army from a Ray Harryhausen movie, how do you fight six of them?’ ”