The lockdown continues, but appears to be having little impact on the level of interest in the immersive environments we build and even less impact on the team’s ability to deliver what is needed to keep our clients working. Stay safe out there and read about our week.

Toby – Leading From Home

There are rumours that this lockdown will shortly be gently lifted, so my thoughts have turned back to the projects that were due to go live in March. One of those projects is our Innovation Centre.

It feels like we’ve been talking about it forever (in reality about 18 months) but March 23rd was the key date for installation of equipment and, for obvious reasons, that got pushed out (quite) a bit.

But here we are talking again about installing our tracking system and sound system to finally, finally, get this off the ground. Stay tuned.

Alongside this we need to think about what we’ll do when the lockdown is lifted. Given that it isn’t critical for us to be in the office all the time, I’ll largely be guided by the feelings of the team and certainly won’t insist that anyone travels in each day.

Josh – Programme Manager

This week I have been juggling the completion of projects, managing ongoing work and specifying new project proposals as our specialist skills take realistic immersive experiences into new sectors, beyond our stream of activity with Highways England. More from me next week, especially if I learn how to dictate my thoughts accurately, rather than type them up ‘hunt & peck’ style!

Cat – Lead Programmer

This week at MXT we’re working on the first solution you might actually term a ‘game’. In it, the player drives down a busy motorway while trying to avoid traffic, follow instructions from road signs and cover as much distance as possible – without racing!

A novel element of this project (at least for our team), in comparison to our Traffic Officer VR solution, is that it’s not an immersive VR experience. The game is played on a normal screen, which, for us in development, means we have a much higher budget to work with in terms of computing resources, even on the same machine as a VR solution.

As a ballpark figure, rendering a scene in virtual reality is about three times more expensive than doing so on a 2D screen, thanks to differences in framerate and resolution. So, it stands to reason that we think we can make this one look really good!
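For intuition, the ‘three times’ figure can be sanity-checked with raw pixel throughput. The numbers below are assumptions (a typical 2020-era PC headset at 90 Hz rendering 1440×1600 per eye, versus 1080p at 60 fps), not our actual hardware:

```python
# Rough pixel-throughput comparison: why VR rendering costs roughly 3x a
# flat screen. Headset figures are illustrative assumptions, not our spec.

def pixels_per_second(fps, width, height, views=1):
    """Total pixels the GPU must shade each second."""
    return fps * width * height * views

vr = pixels_per_second(fps=90, width=1440, height=1600, views=2)   # two eyes
flat = pixels_per_second(fps=60, width=1920, height=1080)          # 1080p/60

ratio = vr / flat
print(f"VR / flat pixel throughput: {ratio:.2f}x")  # roughly 3x
```

With these assumed figures the ratio comes out a little over three, before even counting the stricter latency requirements VR imposes.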

In our initial design meetings, Jordi and I identified the two most complex elements of the game – the road you drive down, and the traffic you drive through.

Both aspects will draw on our previous experience, with the road based on the one in Traffic Officer VR and the traffic on a week-long experiment we did to assess Unity HDRP and DOTS as technologies to make use of in the near future. Jordi’s taken the road, and I’m to do the traffic, so let’s get into it!

Unity’s DOTS (Data Oriented Technology Stack) is a collection of technologies currently in development by Unity to eke out more performance from modern processors.

This is achieved by splitting workloads across all available cores instead of just one. Traditionally, most game code is single-threaded because it’s much simpler to develop: complex logic often has many interdependencies that need to be resolved in a specific order to function correctly.

This presents an issue for processing across several cores, as this can only be done effectively when the problem can be divided into many independent smaller problems.

A particular example of this challenge in the context of the traffic system is how vehicles are informed about their local environment. The naive approach would be to check how far away other vehicles are, and take actions based on the closest ones.

For example, if the car ahead is less than two car lengths away, an action should be taken to slow down. This approach immediately throws up an issue as the only way to access this information is to exhaustively iterate through all the vehicles on the road.

This is a red flag at the best of times, but in our case there may be many hundreds of vehicles on the road at a time, meaning the runtime of the operation is O(N^2), where N is the number of vehicles in the system. This is unacceptably slow, as every vehicle we add to the system will have an increasingly adverse effect on performance.
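The naive approach can be sketched as follows; this is an illustrative Python toy (positions along a single lane, with an invented car length and helper name), not our Unity code:

```python
# Naive neighbour query: every vehicle scans every other vehicle, O(N^2).
# A toy 1D model of positions along one lane; names are illustrative only.

CAR_LENGTH = 4.5  # metres, an assumed average

def should_slow_down(me, others, gap=2 * CAR_LENGTH):
    """True if any vehicle ahead of `me` is within two car lengths."""
    return any(0 < other - me <= gap for other in others)

positions = [0.0, 7.0, 30.0, 31.0]
# Checking every car against every other is N * (N - 1) comparisons:
decisions = [should_slow_down(p, [q for q in positions if q != p])
             for p in positions]
print(decisions)  # [True, False, True, False]
```

Every new vehicle adds a full extra scan for every existing vehicle, which is where the quadratic blow-up comes from.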

My solution is somewhat inspired by how real traffic works, in that drivers don’t know exactly how far away other cars are and they can’t see more than a few cars in any direction. By splitting each lane into cells, I created a rough approximation of traffic density across the entire traffic system.

Then when vehicles need to make decisions about whether to change speed or change lane – not yet implemented – they can consult a few specific cells instead of looking up every car.

All vehicles are now working on the same approximation of their environment and they can do so simultaneously without worrying about the others, so this whole operation can now be ‘jobified’ and split across every core available.
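A rough Python sketch of the cell idea, with invented cell sizes and helper names – the real system lives in Unity/DOTS, so treat this purely as an illustration of the data layout:

```python
# Cell-based approximation: each lane is split into fixed-size cells and
# vehicles consult cell occupancy instead of measuring distances to every
# other car. Sizes and names are illustrative assumptions.

CELL_SIZE = 10.0  # metres per cell, an assumed value

def build_density(positions, road_length):
    """One pass over all vehicles: O(N) instead of O(N^2)."""
    cells = [0] * int(road_length // CELL_SIZE + 1)
    for p in positions:
        cells[int(p // CELL_SIZE)] += 1
    return cells

def crowded_ahead(position, cells, lookahead=2):
    """Check only the next few cells: a constant-time query per vehicle."""
    start = int(position // CELL_SIZE) + 1
    return sum(cells[start:start + lookahead]) > 0

positions = [3.0, 12.0, 14.0, 55.0]
cells = build_density(positions, road_length=100.0)
print(crowded_ahead(3.0, cells))   # cars at 12 and 14 sit in the next cell -> True
print(crowded_ahead(55.0, cells))  # nothing ahead -> False
```

Because the density grid is read-only once built, every vehicle can query it at the same time without locking, which is what makes the per-vehicle decisions safe to split across cores.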

Next week, pathfinding and implementing the player!

Sergio – Programmer

Most of my week involved polishing code for our Virtual Events experience on the Web and trying alternative ways to get Unity stream videos from different sources.

It turned out to be tricky, as Unity requires the actual location of the file, which is itself a security risk and could be a problem for the companies hosting those files (YouTube, Dailymotion, Vimeo) under their terms of service and privacy policies.

Those platforms obfuscate the content behind custom links which neither Unity nor we can use. Testing those videos on our own hosting platform should solve this problem, which is part of what I will be doing next week while waiting for finalised artwork to work with.

Slava – Lead 3D Artist

This week I was working on improving the traffic officer vehicle for Highways England. We are using a model that was originally optimised automatically by tools that are popular across all modelling packages as well as in Unity itself.

While these tools make it easy to simplify a mesh quickly with a single click and generate LODs, they generally give poor results in terms of topology, because in most cases they simply weld vertices.

This may give acceptable results for organic and relatively simple meshes, but with complex models, such as a high-polygon car that often consists of many smaller objects, they produce meshes that may look good whilst possessing inefficient topology.

Issues can include overlapping surfaces, broken geometry and an excessive number of polygons. This topology leads to performance drops and rendering issues – it is also almost impossible to edit such a mesh.

I was undertaking retopology (rebuilding a high-resolution model as a much lighter mesh with clean topology suitable for animation) of such an ‘optimised’ mesh to make it usable in our projects – and to make it possible to edit in future.

This was quite a tricky task, mainly because of the complexity of the original high-polygon model, which came from CAD. For some details I used the Quad Remesher plugin for 3ds Max, which gives a good basis for further optimisation to reduce the number of excess polygons. For other details I did the retopology manually using the new retopology tools in 3ds Max.

Eventually, I managed to reduce the polygon count from more than 200k to less than 50k without losing detail. I then unwrapped the UVs in order to bake the original texture onto the new mesh, which produced a great result.

Stefano – 3D Artist

This week my mission was to prove that one person, working alone, could use the Xsens suit; essential given the extreme working conditions of the lockdown. The entire process took me only three days – actually less than the first time we used it as a team!

The biggest issue to solve was how to start playing the voiceover and start recording the motion capture while knowing exactly when to begin acting, staying perfectly synchronised with the voice.

I decided to add two countdowns at the beginning of every audio clip, to give me time to get everything to start up and prepare me to act.

The procedure went like this:

  • Click to start playing the audio
  • Count down: three, two, one
  • Click record in the mocap software
  • Count down: three, two, one
  • Start acting
Doing this, I knew I had to cut the exact same amount of time from the beginning of every take, as well as from the beginning of the audio clip, which made it much faster to clean and synchronise the files!
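The fixed-lead-in trick above can be sketched as a tiny trimming routine; the durations and labels below are invented purely for illustration:

```python
# Sketch of the fixed-offset sync trick: because every take starts with the
# same two countdowns, the same lead-in can be trimmed from both the audio
# and the mocap take. Durations are illustrative assumptions.

COUNTDOWN = 3.0          # seconds per "three, two, one"
LEAD_IN = 2 * COUNTDOWN  # two countdowns before the acting starts

def trim(events, lead_in=LEAD_IN):
    """Drop the lead-in and rebase timestamps so acting starts at t = 0."""
    return [(t - lead_in, label) for t, label in events if t >= lead_in]

take = [(0.0, "countdown 1"), (3.0, "countdown 2"), (6.0, "first line")]
print(trim(take))  # [(0.0, 'first line')]
```

Because the offset is identical for every take and every clip, the same trim can be applied blindly across the whole session.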

To have even more control over my acting, I kept the voiceover’s sound wave on one screen for visual feedback on the voice timing, as shown in the figure.

The recordings took me just one morning and I recorded a minimum of three takes for each scene.

After the recordings I worked through take selection, reprocessing, exporting and Maya scene setup as before, but this time in a much more optimised and quicker way, despite having more clips to deal with.

Kyung-Min – 3D Generalist

With yet another bank holiday cutting the week short, we were left with only four days in production, which, given our live projects, makes for a heavy daily workload.

The evolution of our new pipeline is bearing fruit through the numerous tests we threw at it over the week. With the majority of the programming logic finished I was left to create the finalised assets that will be baked into the project to produce the final build.

Having my block-outs come to life with Sergio’s code was an amazing experience – seeing the logic take shape and manipulate my art assets as planned, so early on. Now I am working on the finalised assets and, with our new pipeline, will be able to simply swap them in and bake the final build.

My art does not interfere with his code and his coding structure does not affect my art merges, just as the new pipeline was intended to do!

I also spent time considering the role of exhibitions, trade shows and live events that have always been a part of every industry sector, from teaching and healthcare to construction and recruitment.

If this pandemic has proved anything, it’s that these events, with their standard format involving thousands of people travelling to a specific location, are not so efficient or ideal in all cases. It’s time for alternatives to, or an evolution of, the accepted format, which I cover in this blog (link to events blog).

Jordi Caballol – Programmer

This week we are starting a new project that sounds like real fun: a simple game! This game is meant to teach drivers how to behave on a smart motorway, as well as how to react to their car breaking down.

The game will consist of an infinite road, like our infinite runner, but with more intelligent traffic, varying speed limits and other interesting features. We will also use this project to test new technologies.

I’m starting with the road, reusing what we have for the infinite runner from our ‘Eyes and Ears’ project, but adapting it to the peculiarities of the game and to Unity’s High Definition Render Pipeline, the HDRP.

It’s not very visual work for the moment, but it will lay the foundation of the entire project, so more on that next week.