A Georgia Tech Class Blog
Yet another TED talk on creativity. Very enjoyable presentation.
Capturing emotion in virtual characters is a challenge, and it limits expressive creativity in virtual productions: games, 3D-animated movies, and the like. This post is the last of a three-part series introducing some recent evolutions in motion capture technology, or more accurately, performance capture technology.
Last time we looked at L.A. Noire and the technology behind its facial expression capture, which is so detailed that you can read a liar’s tell from a virtual character.
This time, Quantic Dream strikes again, with another tech demo featured at E3 2012. Unlike “The Casting”, this one is not exactly a demo for a game in production. The company simply wanted to push the envelope of convincing performance in virtual characters, and they succeeded. Without further ado, the awe-inspiring video.
This video is actually more interesting and engaging than some full-length movies these days. Within this one short video, the character “Kara” moves through many distinct emotional states. At first she is robotic and artificial. Within minutes, however, she transforms: the audience can sense her delight, enthusiasm, and genuine happiness as she is “brought to life”. Her emotional state then shifts in response to what the man decides to do (no details, to avoid spoilers), and shifts once more as the conflict resolves. If you suspect the emotion is carried by the voice acting rather than the facial expressions, try watching the video again without audio.
Quantic Dream accomplished this masterpiece using the same technology as “The Casting” demo: motion capture. There is one major difference, however, in the number of takes. Quantic felt that running two separate sessions, one for body motion capture and one for facial motion capture (with voice), didn’t bring out the best in the performance. Sometimes the acting would feel disjointed, and it was more difficult for the performers, who had to stay conscious of their other take. Quantic invested heavily in a studio capable of capturing facial and body motion at the same time, which lets performers act naturally, as they would for theater. With this advancement, video games have come even closer to replicating the fine subtleties of a quality performance in the most natural way possible.
Capturing emotion in virtual characters is a challenge, and it limits expressive creativity in virtual productions: games, 3D-animated movies, and the like. This post is the second of a three-part series introducing some recent evolutions in motion capture technology, or more accurately, performance capture technology.
Last time we looked at how Quantic Dream used motion capture to bring an actress’s performance into the virtual world. In this post, we will look at the technology Rockstar Games used to take performance capture to a new level. The following is a promotional video Rockstar created to explain the technology used in L.A. Noire to capture the details of facial expression, adding a completely new dimension to detective games.
The technology is called MotionScan. It uses numerous cameras positioned at many different angles around the performer’s face to generate both the mesh AND the texture, capturing every little detail of the performance. Because the 3D models look just like the actors, it really is the actors acting in the game. The detail of the facial expression is so great that you, as the player, are expected to read a suspect’s face to tell whether they are lying or telling the truth. This method is a step up from marker-based motion capture in the sense that instead of tracking a set number of reference points, it captures the entire face, which is much more accurate. However, it is not without trade-offs. MotionScan “scans” the actors’ faces as they are, so the technology cannot be used to “map” an actor’s performance onto a different-looking virtual character (as in Avatar, where the actor’s performance puppeteered the virtual character’s expression).
Nevertheless, the detail shown by this technology is proof enough that video games can be as compelling and convincing a storytelling medium as movies.
An ad for Samsung’s Galaxy S II smartphone launch in France, featuring the Internet finger-tutting phenomenon JayFunk.
After effects are added to enhance the enjoyment of the finger tutting. It makes you rethink the potential of augmented reality in a performance context.
HOK and his buddies doing some cool tutting using Apple devices.
I couldn’t agree more with his view. Technology is limiting our creativity. Interaction should drive design and technological evolution, not the other way around: technology should evolve to enhance the human experience, rather than humans adapting to the available technology. We’re not E.T., which is what Steven Spielberg envisioned as the evolved form of humans (short legs from not having to walk, a big head and eyes from computer use, long fingers for pressing keys, etc.).
Not the most readable article; try using Readability (www.readability.com).
Or, if you can’t be bothered, the actual article is copy-pasted below.
Capturing emotion in virtual characters is a challenge, and it limits expressive creativity in virtual productions: games, 3D-animated movies, and the like. This post is the first of a three-part series introducing some recent evolutions in motion capture technology, or more accurately, performance capture technology.
The following is a video called “The Casting”, featured at E3 ’06 as a tech demo for the engine and technology behind the game “Heavy Rain” (itself a great piece to look at if you’re interested in creative and interactive storytelling for games). The demo was rendered in real-time 3D. First, watch the video (in HD).
The technology they used for capturing the performance (common in the current motion capture industry) is a two-part process: face with voice, and body. It is very similar to the technology used for the James Cameron movie Avatar (the difference being that Avatar’s scenes were pre-rendered, whereas Heavy Rain runs in real time).
As visible in the following video, the facial capture is done together with the voice (lip sync is very important for a convincing facial performance), using markers on the actor’s face whose motion is mapped onto the 3D model. The motion capture for body action is done separately in a large studio.
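To give a feel for the marker-mapping step, here is a minimal sketch of one common approach: treat each frame of captured marker positions as a weighted blend of known per-blendshape marker displacements, and solve for the weights with least squares. All names, shapes, and the random "rig" data are hypothetical illustrations, not details of Quantic Dream's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

n_markers = 12   # 3D markers glued to the actor's face
n_shapes = 4     # e.g. jaw-open, smile, brow-raise, pucker (illustrative)

# Resting marker positions and, for each blendshape, how it displaces
# every marker. In a real rig these come from calibration, not random data.
neutral = rng.normal(size=(n_markers, 3))
basis = rng.normal(size=(n_shapes, n_markers, 3))

def solve_weights(frame):
    """Recover blendshape weights from one frame of marker positions."""
    A = basis.reshape(n_shapes, -1).T   # (3*n_markers, n_shapes) system
    b = (frame - neutral).ravel()       # observed marker displacement
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Synthesize a frame with known weights and check that we recover them.
true_w = np.array([0.8, 0.1, 0.0, 0.3])
frame = neutral + np.tensordot(true_w, basis, axes=1)
print(solve_weights(frame))
```

The recovered weights then drive the character's blendshapes each frame; production systems add constraints (e.g. weights in [0, 1]) and temporal smoothing on top of this basic fit.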
This is the most common method of performance capture done these days. Next, we’ll look at some improvements made in the last few years.
A video of the Japanese musicians Hifana performing live using modified Nike Free Run+ shoes.
The shoes have accelerometers and strain sensors.
Also, the making-of video explaining the amazing things they did to the shoes to make this happen.
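As a rough illustration of how sensor readings like these can drive sound, here is a minimal sketch: a spike in accelerometer magnitude triggers a drum hit, and the strain (flex) reading is mapped to a continuous control value. The function names and thresholds are assumptions for illustration, not details of Hifana's actual setup.

```python
def magnitude(ax, ay, az):
    """Euclidean magnitude of a 3-axis accelerometer reading."""
    return (ax * ax + ay * ay + az * az) ** 0.5

def process(samples, hit_threshold=2.5):
    """samples: list of (ax, ay, az, strain) sensor readings.

    Returns the triggered drum events and a per-sample control
    value in [0, 1] derived from the strain sensor.
    """
    events, control = [], []
    for ax, ay, az, strain in samples:
        # A sharp stomp shows up as a large acceleration spike.
        if magnitude(ax, ay, az) > hit_threshold:
            events.append("kick")
        # Clamp the flex reading into a usable 0..1 control range.
        control.append(max(0.0, min(1.0, strain)))
    return events, control

events, control = process([(0.1, 0.0, 0.1, 0.2),
                           (3.0, 1.0, 0.5, 0.9),
                           (0.0, 0.2, 0.0, 1.4)])
print(events, control)
```

In a live rig, the events would be sent on as MIDI or OSC messages to a sampler, and the control value would modulate something like a filter cutoff.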
Great design is a never-ending journey of discovery — for which it helps to pack a healthy sense of humor. Sociologist and surfer-turned-designer David Carson walks through a gorgeous (and often quite funny) slide deck of his work and found images.
For those who missed this great TED talk. Loads of humor too.