
100 years of motion-capture technology

From 'Snow White' to Siren.

Modern motion-capture systems are the product of a century of tinkering, innovation and computational advances. Mocap was born a lifetime before Gollum hit the big screen in The Lord of the Rings, and ages before the Cold War, Vietnam War or World War II. It was 1915, in the midst of the First World War, when animator Max Fleischer developed a technique called rotoscoping and laid the foundation for today's cutting-edge mocap technology.

Rotoscoping was a primitive and time-consuming process, but it was a necessary starting point for the industry. In the rotoscope method, animators stood at a glass-topped desk and traced over projected live-action footage frame by frame, copying actors' or animals' actions directly into a hand-drawn world. The technique produced fluid, lifelike movements that animators couldn't achieve on their own.

The first full-length American film to use rotoscoping was Snow White and the Seven Dwarfs, which debuted in 1937, and Disney used the technique in subsequent films, including Alice in Wonderland, Sleeping Beauty and Peter Pan. Though actual mocap systems were still decades away, rotoscoping was precisely the proof of concept the field needed -- clearly, it paid off to mimic real people's actions as closely as possible in animated spaces.

Two decades later, the United States was caught in the Cold War, racing the Soviet Union to the moon, and animator Lee Harrison III was experimenting with analog circuits and cathode ray tubes. In 1959, Harrison lined a bodysuit with potentiometers (adjustable resistors) and was able to record and animate an actor's movements, in real time, on a CRT. This was a rudimentary rig -- the animated actor was essentially a glowing stick figure -- but it marked the first instance of real-time motion capture.
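
The principle behind Harrison's rig is easy to demonstrate: a potentiometer strapped across a joint acts as a voltage divider, so its output voltage maps roughly linearly to the joint's angle. Here is a minimal Python sketch of that mapping; the 10-bit ADC, the 270-degree travel and the `read_adc` stand-in are our illustrative assumptions, not details of Harrison's hardware.

```python
# Toy model of potentiometer-based joint sensing (hypothetical hardware).
# A potentiometer wired as a voltage divider produces a reading proportional
# to its rotation; an ADC digitizes it, and we scale counts to a joint angle.

ADC_MAX = 1023           # assumed 10-bit analog-to-digital converter
POT_TRAVEL_DEG = 270.0   # assumed usable rotation of the potentiometer

def read_adc(channel: int) -> int:
    """Stand-in for real ADC hardware: returns raw counts in 0..ADC_MAX."""
    return 512  # simulated mid-range reading, i.e., a half-bent joint

def joint_angle_degrees(channel: int) -> float:
    """Scale a raw potentiometer reading into a joint angle in degrees."""
    return (read_adc(channel) / ADC_MAX) * POT_TRAVEL_DEG

print(f"elbow: {joint_angle_degrees(0):.1f} degrees")  # ~135.1 degrees
```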

By the 1980s, animators were using bodysuits lined with active markers and a handful of large cameras to track actors' movements, resulting in digital images with far more detail and precision than Harrison's phosphorescent line drawings. But even in the 1990s, each mocap-ready camera was roughly the size of a small refrigerator, and animators had to manually assign each marker, in each frame, for every scene. It was nearly as painstaking as rotoscoping.
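
To get a feel for why manual labeling was so painful -- and what the automation Ovadya describes below actually replaces -- consider the simplest possible auto-labeler: carry each marker's label forward to whichever detection sits closest in the next frame. The NumPy sketch that follows is our toy illustration of that idea, not Vicon's algorithm, which must also survive occlusions, marker swaps and dozens of markers per performer.

```python
import numpy as np

def carry_labels_forward(prev_positions, prev_labels, new_positions):
    """Naive auto-labeling: give each previous marker's label to the
    nearest detection in the new frame (toy illustration only)."""
    assignment = {}
    for pos, label in zip(prev_positions, prev_labels):
        dists = np.linalg.norm(new_positions - pos, axis=1)
        assignment[label] = int(np.argmin(dists))  # index of closest detection
    return assignment

# Three markers (head and hands, positions in meters) drifting between frames.
frame0 = np.array([[0.0, 1.7, 0.0], [-0.4, 1.0, 0.1], [0.4, 1.0, 0.1]])
frame1 = frame0 + 0.01  # small, uniform motion for the toy example
print(carry_labels_forward(frame0, ["head", "l_hand", "r_hand"], frame1))
# {'head': 0, 'l_hand': 1, 'r_hand': 2}
```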

"We've definitely come a long way, especially because everything is automated now," says Jeffrey Ovadya, sales director at Vicon.

Vicon has been in the mocap business for more than 30 years, having been established in Oxford, UK, in 1984 (a founding year that surely made George Orwell sit straight up in his grave). Think of the company as a one-stop shop for mocap rigs, offering everything from cameras and sensors to the software that turns all of that data into a digital image in real time.

Vicon provided the mocap systems for a handful of blockbuster films, including Titanic, Marvel's Avengers universe, Paddington and Ready Player One, and games including Life is Strange and Hellblade: Senua's Sacrifice.

That last title holds a special place in mocap history. Hellblade is a powerful, award-winning action game from independent developer Ninja Theory, and in 2016, it served as an introduction to the modern world of motion capture. Ninja Theory partnered with Epic Games, Vicon and a handful of other companies to create a live mocap demo that showed off the accessibility and real-time fidelity of the latest technology, and they took it on tour. At Siggraph and the Game Developers Conference, the studios demonstrated how their technology worked and how it was finally available at a reasonable price for smaller teams, not just billion-dollar AAA powerhouses.

"Hellblade was phenomenal," Ovadya says. "That real-time live demo that they did actually at Siggraph was unbelievable; they did one at GDC too. But, back then, there was a lot more that went into the production behind the scenes to get everything prepared to get to the point where it was real-time."

While the Hellblade demo was impressive, it was far more choreographed than the initial presentation made it seem. The actress had a set series of motions to enact, and developers had to push the software to ensure it could keep up in a live setting.

"Fast-forward two years later, and computing power has doubled. GPU power has doubled," Ovadya says. "Unreal Engine has added even more capability and even more realism, so all of a sudden, all of the background effort that went into Senua's Sacrifice became just easier, shorter, quicker, and they were able to get to that point faster. So if they were investing the same amount of time for a project, let's just say 100 hours, and the mocap part took 80 for Senua's Sacrifice, for Siren it took an hour."

Siren is Vicon and Epic Games' latest mocap demo, and it's even more impressive than Hellblade. Siren is a digital human powered by a live actress wearing a mocap suit and a facial-capture rig, and the system allows for a level of spontaneity and realism that simply wasn't possible with the Hellblade setup. The actress can improvise, walk to new areas of the floor and say whatever she wants, all while a 3D avatar mimics her every move.

"It's really about a real human driving a virtual human and people not being able to tell the difference," Ovadya says.

Better mocap technology means developers can animate scenes in less time than ever, but Ovadya says that doesn't mean production time will be slashed -- it'll just be better spent. For instance, if mocap for the Siren demo takes one hour instead of 80, he argues developers will use that time to make the final product look as lifelike as possible, adding background details and fine-tuning movements in the animated image.

"If you want to make a game now, there's so much of a requirement for complexity, depth and realism that I don't think you could do that kind of a game without motion capture," Ovadya says. "Because motion capture, while it's its own art form, is a tool that tries to make human motion immediately translated and recorded without a whole bunch of postprocessing. To be able to get human motion into a game as quickly as you can, all right, that's one less thing to worry about."

Mocap saw a boom in the 1990s as developers took advantage of multithreading, higher processing speeds and, later, the ability to use the GPU as a general-purpose processor. Those advances still underpin today's mocap rigs, and computers are only becoming faster and more powerful -- but another boom is on the horizon.

"Obviously, the holy grail for motion capture for everybody is markerless," Ovadya says. "They don't want to have to worry about fiducials, and as AI and quantum computing come in and they can process things faster and maintain their targeting, then we'll slowly start losing markers, things will come out faster, you wouldn't have to set up as many cameras, you could probably do things in a much larger space at higher speeds. You could just throw a system up in a workplace and just start tracking pretty much everybody. And that's in the five- to 10-year range."

While this is an exciting vision of the future, mocap technology comes with its own set of potential terrors. Even ignoring the obvious Big Brother nature of phrases like, "You could just throw a system up ... and start tracking pretty much everybody," it's clear that people are already using this kind of technology to mislead and lie. Deepfakes -- pornography that swaps one actress' face for another's -- are gaining steam as an industry despite the half-hearted efforts of sites like Pornhub to ban revenge-porn-esque videos. And Jordan Peele, the Oscar-winning writer and director of Get Out, recently demonstrated how easy it is to make it appear as if someone like Barack Obama is saying ridiculous things.

"If there are people that want to use this advancing technology for ill intent, then we will in every possible way disassociate ourselves from them," Ovadya says. "But I think that's just kind of the nature of all technology right now. It's not just motion capture -- any technology is advancing so fast that it can be used to confuse people and to take them in a different direction. Hopefully, people recognize that that's not what this should be used for."

While film and gaming are two of the most consumer-facing uses of Vicon's technology, much of its business is actually in biomechanics, helping track and treat patients with cerebral palsy, injuries affecting their movement and gait, and a broad range of other medical conditions. About half of Vicon's business is dedicated to biomechanics, while a quarter is video games and movies, and the last quarter is robotics, including fields like virtual reality and autonomous driving.

All of these industries feed one another, driving progress across the motion-capture business as a whole. This is especially true of the relationship between film and video games.

"They're both on par and they try to set different standards for each other and keep pushing each other forward. Games are definitely a big thing, and there's a lot of energy there," Ovadya says. "[Developers] can spend a lot more time animating beautiful landscapes in Horizon Zero Dawn or being able to capture unique athletes for FIFA. All of these things are being done so much faster and so much easier because mocap is just painless."


"Back then, there was a lot more that went into the production behind the scenes."

"The holy grail for motion capture for everybody is markerless."