The high degree of realism we build into the immersive experiences we create for clients is central to our offering and what sets MXTreality apart. But we're not a business that rests on its laurels: we continue to invest in technology that improves our work and allows us to create even more precise animations.
That's why several members of the team spent a week researching the best ways to introduce motion capture technology into our workflow, and identified a full-body suit that would fit the bill perfectly.
Motion capture, sometimes called match moving, is widely used in entertainment, sports and computer vision validation. During our R&D process, we discovered many advantages to using this technology compared with traditional hand-keyed keyframe animation.
Obvious advantages
Firstly, let’s look at some of the advantages:
- We could gather real-time results with near-zero latency over a wireless connection. This means we could connect our suit to any laptop through a repeater and take the equipment anywhere to shoot and record our animations (there's a small sketch of a network listener after this list).
- Recording complex animations this way reduces the time and cost of traditional keyframe-based animation. Much of the time normally spent making a character animation look realistic becomes unnecessary, because the suit records complex movements and physical interactions in a physically accurate way.
- The technology allows directors and actors to test different styles and approaches with significantly greater freedom.
- It integrates with most popular 3D modelling and animation packages, as well as with game engines.
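To give a feel for the real-time link mentioned above, here is a minimal sketch of a listener that receives pose data streamed over the local network. It assumes the capture software can broadcast UDP datagrams to a configurable port; the port number is an example and the vendor-specific packet format is not decoded here.

```python
# Minimal sketch of a real-time listener for motion-capture data streamed
# over the local network. The port number and the assumption that the
# capture software broadcasts UDP datagrams are illustrative only; check
# the streaming settings of your capture software for the real values.
import socket

LISTEN_PORT = 9763          # example port; configure to match the streamer
BUFFER_SIZE = 4096          # large enough for a single pose datagram

def listen(port: int = LISTEN_PORT) -> None:
    """Print the size and source of each incoming pose datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))   # accept datagrams from any interface
    print(f"Listening for mocap stream on UDP port {port}...")
    try:
        while True:
            data, addr = sock.recvfrom(BUFFER_SIZE)
            # A real consumer would decode the vendor's packet format here;
            # we only report what arrived to confirm the link is live.
            print(f"{len(data)} bytes from {addr[0]}:{addr[1]}")
    finally:
        sock.close()

if __name__ == "__main__":
    listen()
```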
The technology does have a few disadvantages, such as the dedicated software and hardware needed to produce and process the data, and the need for a capture space free of magnetic distortion, but the advantages far outweigh these drawbacks.
Practical implementation
With some of the potential disadvantages identified, we recognised that one of the motion capture suits we were considering addressed many of them.
Xsens MVN gives full freedom of movement because it does not use cameras to track the actor's position; instead, each sensor attached to the suit sends its signals directly to the software for processing. It is a portable and flexible system that's ideal for use both indoors and outdoors.
During our first tests, we found that setting up each actor requires approximate body measurements and correct placement of the movable sensors, which can take several minutes when switching from one actor to the next.
In the next phase, calibrating the body so it syncs with the software is very straightforward and should take no more than 20 seconds.
Every recorded take is automatically reprocessed, and we found that the standard fast reprocessing introduces small, regular spikes in some animation curves. We are still not sure of the cause, but it could be related to signal interference in our testing space.
It seems, however, that the reprocessing also takes care of those spikes: they disappear, and the final quality of the exported file is more than satisfactory, at 240 keyframes per second.
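Out of curiosity, here is a rough sketch of how isolated spikes like those could be detected and smoothed in an exported curve if they ever survived reprocessing. The curve values are hypothetical and the median-filter approach is our own fallback idea, not how the Xsens software handles it.

```python
# Rough sketch: detect and smooth isolated spikes in an animation curve
# using a simple median filter. The curve data is hypothetical and this is
# not the reprocessing step the Xsens software performs.
from statistics import median

def smooth_spikes(values, window=5, threshold=3.0):
    """Replace samples that deviate sharply from their local median."""
    half = window // 2
    smoothed = list(values)
    for i in range(half, len(values) - half):
        neighbourhood = values[i - half:i + half + 1]
        local_median = median(neighbourhood)
        deviation = abs(values[i] - local_median)
        spread = median(abs(v - local_median) for v in neighbourhood) or 1e-9
        if deviation / spread > threshold:       # sample is an outlier
            smoothed[i] = local_median           # snap it back to the local trend
    return smoothed

# Example: a rotation channel with one obvious spike at index 4.
curve = [10.0, 10.1, 10.2, 10.3, 45.0, 10.5, 10.6, 10.7, 10.8]
print(smooth_spikes(curve))
```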
Having invested in the Xsens technology, we are excited about the potential it brings to our current and future projects. Check back for regular updates and more samples of our animated characters, captured in the studio and elsewhere.