01 July 2007

Machinima in 3-D


Recently, my curiosity was awakened by the latest generation of machinima: animation created using game engines, their cameras and their characters in combination with original dialogue and music. It is an interesting medium, closer to puppeteering than to animation, depending on the amount of post-production employed. The more post-production, such as animation refinement and image manipulation, is applied, the more it becomes what animation is: fully controlled, frame-by-frame film.

Right now, machinima all have clumsy elements that detract from the enjoyment of the end result and from their claim to be serious competition to the art of animation. Most of all, the preset animation loops of game characters are game-specific, so telling a narrative story regularly forces ill-fitting animation into service. Also, when dialogue is involved, the lip sync is almost always off, with generic open-close mouth positions. Facial expressions are equally generic, with smile/frown/surprise covering the basic range of available emotions.


Only the strong survive - by Riot Films


However, machinima’s saving grace is that a good story is always gripping, no matter how clumsy the animation or how low-poly the rendering. We are human and love to see human interaction (or dialogue between humanoid creatures, humanoid objects or otherwise anthropomorphic characters).

So where does stereo 3-D come into play? Well, being computer-generated, machinima is relatively easy to render in 3-D. Nothing new there, but there is one more element to this new medium that makes it very interesting indeed in terms of 3-D transmission: the way games transmit their animation data online. Because computer games work with client-side rendering, based on a collection of textures, polygons, character rigs and image rendering rules, only the coordinates of the character rigs’ animation control points, the environment’s parameters and the camera’s position and settings need to be transmitted over the internet to create a machinima. That, plus of course the original dialogue and the music & effects tracks. All in all, not much more data than a high-bitrate audio stream, and definitely much less than today’s standard MPEG-2 (satellite, digital terrestrial and DVD) datastream.
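
As a back-of-the-envelope illustration, here is a small Python sketch of what a single frame of such a scene-state stream might weigh. All bone counts, field layouts and frame rates below are assumptions made for the sake of the estimate, not any real game protocol.

import struct

# Illustrative scene-state payload per frame: rig control points,
# camera parameters and environment settings only (all counts assumed).
BONES_PER_CHARACTER = 60   # typical humanoid rig (assumption)
FLOATS_PER_BONE = 7        # position (3) + rotation quaternion (4)
CHARACTERS_IN_SHOT = 4
CAMERA_FLOATS = 8          # position, orientation, FOV, focus
ENV_FLOATS = 16            # lights, fog, time of day, etc.

floats_per_frame = (BONES_PER_CHARACTER * FLOATS_PER_BONE * CHARACTERS_IN_SHOT
                    + CAMERA_FLOATS + ENV_FLOATS)
bytes_per_frame = floats_per_frame * struct.calcsize("f")  # 4 bytes per float

FPS = 25
bits_per_second = bytes_per_frame * FPS * 8
print(f"{bytes_per_frame} bytes/frame -> {bits_per_second / 1e6:.2f} Mbit/s")

That works out to roughly 6.8 kB per frame, or about 1.4 Mbit/s uncompressed; quantisation or simple delta-coding could bring that down considerably, and it is in any case well under the 4-15 Mbit/s of a typical MPEG-2 video stream.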



On the recording side, actors and presenters only need to be motion-captured and scanned for their face, body and clothing textures, while their dialogue is recorded for the audio stream.

What we are looking at is a possible future of television. It is a different, inverse way of thinking compared to how we watch TV now, because this is active client-side rendering combined with a small datastream, rather than our current passive, bandwidth-consuming reception of an image signal.
The most obvious benefit is endless scalability, as seen in streamed Flash animation, so an HD or SHD end result (there is talk of 8K broadcasting in Japan) is no longer a problem in relation to available bandwidth. No need for a nationwide fibre-optic network, only for very powerful computers on the receiving viewer’s side to render all the imagery in real time, or with a slight delay. And computers are becoming more powerful at a faster rate than fibre-optic networks are being rolled out.
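
To put rough numbers on that scalability claim, here is a naive comparison in Python, assuming video bitrate scales linearly with pixel count from a 5 Mbit/s MPEG-2 standard-definition baseline (both figures are ballpark assumptions):

# Pixel-based delivery grows with resolution; a scene-parameter
# stream does not (the ~1.4 Mbit/s figure comes from the sketch above).
mpeg2_sd_mbit = 5
sd_pixels = 720 * 576
targets = {"HD 1080": 1920 * 1080, "8K": 7680 * 4320}

for name, pixels in targets.items():
    print(f"{name}: ~{mpeg2_sd_mbit * pixels / sd_pixels:.0f} Mbit/s of pixel data")

print("scene stream: ~1.4 Mbit/s at any resolution")

Pixel-based delivery balloons to roughly 25 Mbit/s for HD and some 400 Mbit/s for 8K, while the scene-parameter stream stays the same size regardless of the resolution it is rendered at.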

Of course, the real issue with this way of doing TV is the need for assets like textures, rig configurations and polygon models to be sent ahead of each channel’s program. That transmission can quickly run to hundreds of megabytes, which, at MPEG-2 data rates, can take anywhere from about a minute to ten minutes. Not very realistic then, unless a fixed preset of textures and models is used for every program. And that would be a bit samey and Big Brother-esque…
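
A quick sanity check on those waiting times, assuming asset bundles of 100-300 MB and typical MPEG-2 bitrates of 4-15 Mbit/s (both ranges are assumptions):

# How long does an asset bundle take to arrive at MPEG-2-class rates?
for size_mb in (100, 300):
    for rate_mbit in (4, 15):
        minutes = size_mb * 8 / rate_mbit / 60
        print(f"{size_mb} MB at {rate_mbit} Mbit/s: {minutes:.1f} min")
# Output ranges from about 0.9 min (100 MB at 15 Mbit/s)
# to 10 min (300 MB at 4 Mbit/s).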

We will keep thinking about it, and in the meantime, enjoy another episode of Red vs. Blue.

