Once upon a time, somebody took pen to paper and created Felix the Cat, Olive Oyl, Donald Duck, and, going even further back, Fantasmagorie (Émile Cohl’s 1908 animation is commonly identified as the world’s oldest cartoon). Nowadays, animation is a totally different game, with the global animated feature industry (including both film and gaming) projected to be worth US$265 billion in 2019. From a technological perspective, the film and gaming industries have always been complementary entities (see: the pioneering use of motion capture in both), and we are already seeing widespread utilisation of game engines in major Hollywood pictures.
Epic Games’ Unreal Engine 4 was used to render and composite the droid K-2SO in the final shots of Rogue One: A Star Wars Story, as well as to create scenes in Pixar’s Finding Dory. Epic subsequently modified the engine to natively read Pixar’s USD format for the studio’s future projects. Just last week at Unite Berlin, Unity unveiled their MARS (Mixed and Augmented Reality Studio) platform, which offers facial capture capabilities for animators working in the film and gaming industries (as well as designers and graphic artists in other fields). At the same event, Unity showcased Soba Productions’ Sonder, an animated short film made entirely with the Unity engine. As more agile and cost-efficient options than traditional film CGI software, game engines and simulation technology offer increasingly comparable results without as much of a financial or technical burden.
But better tools are just the beginning. As Timoni West said in the Unite keynote, the aim is to create “apps that live in and react to the real world” with “reality [as] the build target.” The likes of Microsoft’s Hololens and Magic Leap are attempting to take sports entertainment to a new level with AR and MR, but simulation technology offers a means of pushing even further beyond these enhanced viewing experiences. Imagine if you could watch a Star Wars movie that was built in a persistent simulated world, and then go home and dive into the exact same digital environment in a connected game - not a cosmetic replication, mind you, but the actual living and breathing world used to build the movie. Now imagine that your actions as a player in this world could shape the setting and narrative of a film sequel. It’s not a great stretch of the imagination to see the value that this could offer a franchise-driven beast like Disney (whose net worth already sits at US$156.3 billion), but this is the kind of transformative change that simulation can unlock in the entertainment industry.
Right now, the bottleneck is computational. Consider that an animated film can occupy as much as 250TB of storage, purely for high-fidelity rendering and animation files. Disney's Big Hero 6 (released in 2014), for example, was rendered on a 55,000-core supercomputer. Add a further layer of interactivity, accounting for AI, dynamic physics, a fluid player population, a service layer, and more, and the workload becomes significantly larger (not to mention the burden of ensuring that any game remains performant on a player's hardware).
While a shared simulated world that straddles different media is out of reach with the technology stack typically employed today, it's also not as far-fetched as it may initially appear. The continued evolution of cloud infrastructure and edge technologies means that we'll soon be able to distribute software and algorithms at massive scale, which will allow us to achieve these kinds of breakthroughs. It'll certainly allow game developers to craft experiences with a level of depth and complexity that we've never seen before. But why stop there? I think we're about to see a revolution across the entire entertainment industry.
Hadean is an operating system designed for distribution and scale; its distribution-first optimisations allow developers to build, run, and scale real-time applications at hyperscale.