DeepLoco enables animated characters to teach themselves to walk

Think of the most recent Disney movie you’ve seen. Remember how vividly water splashed, a snowball rolled or cloth wrinkled? These animated objects look so lifelike because they are simulated using realistic physical laws.

Human motion, though, remains a hard puzzle for animators. However effortless it looks, scientists still don’t understand exactly how humans control their movement to adapt to the environment.

Physics-based modeling is now the predominant way of animating motion in films and games. A physics-based model takes the forces applied at the joints of an animated character as input, and produces the character’s resulting motion under the laws of mechanics as output.
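
To make the “forces in, motion out” idea concrete, here is a toy Python sketch of a single simulated joint, a crude stand-in for a full physics engine. Every name and number below is illustrative, not taken from any real animation system.

```python
import math

def step(angle, velocity, torque, dt=0.01, inertia=1.0, gravity=9.8, length=0.5):
    """Advance one pendulum-like limb a single time step under the laws of mechanics."""
    # Input: a torque (force) applied at the joint.
    # Output: the limb's new pose, obtained by integrating Newton's laws.
    accel = (torque - gravity * length * math.sin(angle)) / inertia
    velocity += accel * dt   # acceleration accumulates into velocity...
    angle += velocity * dt   # ...and velocity accumulates into the pose
    return angle, velocity

# Apply a steady torque and watch the limb swing forward.
angle, velocity = 0.0, 0.0
for _ in range(100):
    angle, velocity = step(angle, velocity, torque=2.0)
print(f"after 1 second the joint has swung to {angle:.2f} radians")
```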

But most of the time, artists have a certain output in mind — how an animated arm swings after a high-five — and don’t want to sit around calculating the input — how much force is needed to make the arm swing.

The traditional way to get around this problem is a process called motion capture: the actions of real actors playing a character are recorded and transformed into animation.

There are obvious problems with motion capture. What if the soccer ball in the film needs to come in a little further to the right than was first captured? What if one character bumps into another?

To solve these problems, animators have proposed using machine learning, letting animated characters learn how to move by themselves.

UBC computer science professor Dr. Michiel van de Panne and his team have recently developed a machine learning algorithm called DeepLoco, which an animated character can use to learn to perform a variety of walking-related tasks with a natural appearance.

DeepLoco is built on the idea of a controller. A controller takes in the desired state for the character and the current state, and decides on an action as the output. The action is then fed into a physics-based simulator so that the character walks as instructed.
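
In DeepLoco the controller is a deep neural network, but the loop it sits in can be sketched with something far simpler. The hypothetical version below substitutes a plain proportional rule for the learned network:

```python
def controller(current_state, desired_state):
    """Decide on an action given where the character is and where it should be."""
    error = desired_state - current_state   # how far off target are we?
    return 0.5 * error                      # push harder the further off we are

# The physics simulator closes the loop: the action is applied,
# physics runs, and a new current state comes back for the next decision.
current_state, desired_state = 0.0, 1.0
for _ in range(20):
    action = controller(current_state, desired_state)
    current_state += 0.1 * action           # stand-in for one physics step
print(f"state after 20 steps: {current_state:.2f}")
```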

Basically, a controller like DeepLoco acts as a brain or nervous system for the animated character.

How did van de Panne’s group use the controller to make a character learn to walk?

First, the character starts out by simply guessing how to walk. If the character does something correctly, say by stepping over an obstacle, the controller is rewarded. In other words, the next time the character is in a similar situation, the same action is more likely to be selected.

Once in a while, the controller randomly selects a less favourable action just to explore possibly better solutions. After many iterations like this, the controller tends to stop making weird guesses, and the character copes with the environment a little better with each step it takes. That is, the character has “learned” through this trial-and-error process and gained “experience.”

Once the character has a good grasp of how to walk, the random exploration is turned off.
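
In machine learning terms, this loop is reinforcement learning with occasional random exploration. The sketch below is a bare-bones version; the two actions, the reward rule and the exploration rate are toy stand-ins for DeepLoco’s far richer setup:

```python
import random

actions = ["small_step", "big_step"]
value = {a: 0.0 for a in actions}   # the controller's running estimate of each action's worth
counts = {a: 0 for a in actions}
epsilon = 0.1                       # how often to try a less favourable action

def reward(action):
    # Pretend a big step clears the obstacle 80 per cent of the time.
    return 1.0 if action == "big_step" and random.random() < 0.8 else 0.0

for episode in range(1000):
    if random.random() < epsilon:             # once in a while, explore at random...
        action = random.choice(actions)
    else:                                     # ...otherwise pick the best-known action
        action = max(actions, key=value.get)
    counts[action] += 1
    # Rewarded actions become more likely to be selected next time:
    value[action] += (reward(action) - value[action]) / counts[action]

epsilon = 0.0   # training is over: turn the random exploration off
print(value)    # "big_step" should now carry the higher estimated value
```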

Inspired by humans, the researchers have designed two different controllers: a high-level one and a low-level one.

When we are walking, our brain keeps us on track by telling us to maintain a certain posture. On a subconscious level, our muscles execute that target posture by applying forces to our joints. Muscles can adjust very quickly, many times a second.

The high-level controller mimics our brain, while the low-level controller mimics our subconscious command of the muscles. The cooperation of the two controllers is one of the key advances of DeepLoco, which enables the character to achieve relatively complex tasks.
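
Here is a minimal sketch of that two-level arrangement. The rates, gains and goals are invented for illustration; in the real system both levels are learned neural networks rather than the hand-written rules used here:

```python
def high_level_controller(position, goal):
    """The 'brain': occasionally chooses where the next footstep should land."""
    direction = 1.0 if goal > position else -1.0
    return position + 0.3 * direction            # target for the next step

def low_level_controller(position, footstep_target):
    """The 'muscles': rapidly produces forces that track the current target."""
    return 2.0 * (footstep_target - position)    # push toward the target

position, goal, dt = 0.0, 3.0, 0.01
for tick in range(1000):
    if tick % 50 == 0:                           # the brain decides only now and then...
        target = high_level_controller(position, goal)
    force = low_level_controller(position, target)   # ...the muscles act every tick
    position += force * dt                       # stand-in for one physics step
print(f"walked to {position:.2f} (goal was {goal})")
```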

Currently, van de Panne’s character is able to navigate a narrow zigzagging path, run on conveyor belts or ice, climb up and down a gentle slope and dribble a soccer ball. It can even recover its balance after being hit by flying bricks.

For the next step, the group plans to speed up the learning phase. To gain enough experience for the walking tasks, the character has to train for days, even weeks. The researchers are also looking into combining separately learned skills, such as running and dribbling in soccer.

These computer simulations are useful for producing not only normal-looking human motion, but also motions of various styles. They may even help us understand how a dinosaur walked under the principles of mechanics.

Machine learning has recently been attracting a lot of attention for its ability to solve complicated problems like motion control. Unlike traditional engineering approaches, which rely on explicit mathematical models, the learning approach trains the system without spelling out the details. At a more philosophical level, it may even reflect how humans and animals learn to control their motion.

“When Roger Federer does his backhand slice … there’s no equation running through his head. [He] just practiced his stroke often enough and he knows how reliable it is,” commented van de Panne.
