Capturing emotion in virtual characters is a challenge, and it limits expressive creativity in virtual productions: games, 3D animated films, and the like. This post is the first in a three-part series introducing some of the recent evolution in motion capture technology, or more accurately, performance capture technology.
The following is a video called “The Casting,” featured at E3 2006 as a tech demo for the engine and technology used in the game “Heavy Rain” (which is also a great piece to look at if you’re interested in creative, interactive storytelling in games). The demo was rendered in real-time 3D. First, watch the video (in HD).
The technology they used for capturing the performance (still common in the motion capture industry today) is a two-part process: face (with voice) and body. Very similar technology was used for the James Cameron movie “Avatar” (the difference being that Avatar’s scenes were pre-rendered, whereas Heavy Rain runs in real time).
As you can see in the following video, the facial capture is recorded together with the voice (lip sync is very important for a convincing facial performance), while the body motion capture is done separately in a large studio. The facial capture works by placing markers on the actor’s face and mapping their movement onto the 3D model.
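To give a rough idea of what “mapping markers to the 3D model” can mean in practice, here is a minimal sketch of one common approach: solving for blendshape weights that best reproduce the captured marker positions. All names and data here are illustrative assumptions, not taken from Heavy Rain’s or Avatar’s actual pipeline.

```python
import numpy as np

def solve_blendshape_weights(neutral, shapes, captured):
    """Hypothetical sketch of marker-driven facial animation.

    neutral:  (M, 3) marker positions on the actor's face at rest.
    shapes:   (K, M, 3) marker positions for each blendshape at full strength.
    captured: (M, 3) marker positions from one frame of capture.
    Returns (K,) blendshape weights, clamped to [0, 1].
    """
    # Each blendshape contributes a displacement from the neutral pose;
    # stack those displacements as columns of a (3M, K) matrix.
    deltas = (shapes - neutral).reshape(len(shapes), -1).T
    # The captured frame is treated as a displacement from neutral too.
    target = (captured - neutral).ravel()
    # Least-squares fit: which mix of blendshapes best explains the frame?
    weights, *_ = np.linalg.lstsq(deltas, target, rcond=None)
    return np.clip(weights, 0.0, 1.0)
```

A real pipeline would add head-pose stabilization, temporal smoothing, and far more markers and shapes, but the core idea of fitting a parametric face model to tracked markers is the same.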
This is the most common method of performance capture in use today. Next, we’ll look at some improvements made over the last few years.