Ray-tracing, a computer graphics technique that has long been too computationally expensive for consumer hardware, represents a huge leap beyond current 3D rendering technology.
It renders much more realistic scenes at scale by simulating the way light truly behaves. It was a hot topic at last week's E3 in LA, and several hardware companies such as AMD and NVIDIA have announced that their future graphics cards will incorporate ray-tracing features. Cloud companies are also announcing projects that rely more on this technology. So what is cloud-based ray-tracing, and how does it really work?
The current industry standard for 3D rendering, rasterization, is an elaborate process that ‘tricks’ our brains into seeing a 3D image by using a variety of approximations for how materials reflect light in real life. Each pixel has to be individually assigned a colour and shading value, and only then can it take its place in the final image of the virtual object being rendered.
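To make the "approximations" point concrete, here is a minimal sketch of the kind of per-pixel shading trick rasterizers rely on: the classic Lambertian (diffuse-only) model, which approximates brightness by how directly a surface faces the light. The function and values are purely illustrative, not taken from any real engine.

```python
def lambert_shade(normal, light_dir, base_colour):
    # The dot product approximates how directly the surface faces
    # the light: 1.0 means head-on, 0.0 means edge-on or facing away.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * n_dot_l for c in base_colour)

# Shade one pixel of a surface facing straight up, lit from directly above.
pixel = lambert_shade((0, 1, 0), (0, 1, 0), (0.8, 0.2, 0.2))
```

A real rasterizer runs a shader like this for every pixel of every triangle, layering further approximations (shadow maps, reflection probes) on top, which is exactly the "puzzle" assembly described above.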
Path-tracing, a common way of rendering in 3D, simulates rays as they are emitted from light sources and ‘bounce’ off different surfaces. How each bounce scatters the light depends on the reflectiveness of the object and how much light its material diffuses.
It can be complex, as it involves working out where these rays intersect surfaces with varying material properties. It is computationally heavy, time-intensive and costly. Nevertheless, the outcome is dramatically better: light is modelled far more accurately (although still as an approximation) than in more speculative rasterized graphics, and as a result scenes can look photorealistic.
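The bounce-and-scatter idea described above can be sketched in a toy Monte Carlo path tracer. The scene here is entirely made up: instead of real geometry intersection, each bounce is assumed to hit some surface, with a 10% chance of reaching a light. The reflectance and light values are invented for illustration.

```python
import random

def trace_path(depth, max_depth=5, reflectance=0.7, emitted=1.0):
    """Return the light carried back along one random path of bounces."""
    if depth >= max_depth:
        return 0.0
    # A real tracer would intersect the ray with scene geometry here;
    # we simply assume every bounce hits a surface.
    if random.random() < 0.1:  # 10% chance this bounce hits a light source
        return emitted
    # Otherwise the surface scatters the ray onward, reflecting only a
    # fraction of whatever light arrives along the continued path.
    return reflectance * trace_path(depth + 1, max_depth)

# Average many random paths to estimate one pixel's brightness.
samples = [trace_path(0) for _ in range(10_000)]
estimate = sum(samples) / len(samples)
```

Averaging thousands of such paths per pixel is what makes the technique so computationally heavy, and why each extra bounce multiplies the work.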
At present, ray-tracing is limited by traditional single-computer technology, which means that every scene in a game is rendered for each person individually. This is neither economical nor performant and would be far too slow for realtime gaming, quite apart from the fact that every gamer would need to own the latest, most expensive hardware.
When ray-tracing is distributed across multiple machines in the cloud, running server-side and tapping into massive pooled resources, it can deliver fully fledged ray-traced rendering in realtime. The constraints studios face when building games are lifted: no longer must they navigate the restrictions of varied hardware, making sacrifices when executing their gameplay vision and delivering a compromised version of the original concept.
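One common way to distribute rendering work, sketched below, is to cut the frame into tiles and fan the tiles out to a pool of workers. This is an assumed scheme for illustration only; the source does not describe how any particular cloud engine schedules its rendering.

```python
from concurrent.futures import ThreadPoolExecutor

def tiles(width, height, tile_size):
    """Split a frame into (x, y, w, h) tiles, clipping at the edges."""
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            yield (x, y, min(tile_size, width - x), min(tile_size, height - y))

def render_tile(tile):
    x, y, w, h = tile
    # A real worker would trace rays for every pixel in its tile;
    # here we just report how many pixels it was responsible for.
    return w * h

# Fan the tiles out to a pool of workers and gather the results.
# In a cloud deployment each worker would be a separate machine.
with ThreadPoolExecutor(max_workers=8) as pool:
    pixel_counts = list(pool.map(render_tile, tiles(1920, 1080, 256)))
```

Because tiles are independent, adding more machines shortens the render roughly in proportion, which is what lets pooled cloud hardware reach realtime speeds that a single box cannot.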
With cloud-based ray-tracing, game worlds can become more complex than anything previously conceived, and graphics can look as good as a photograph. Ultimately, the end product of this technology is far better than single-computer ray-tracing, because the cloud provides computing power no single machine can match. Not only does it give game designers the chance to realise their full creative vision, it also gives players games they could previously only have dreamed of experiencing.
This superior hardware is why all big gaming publishers, hardware manufacturers, engine developers and cloud companies are pushing gaming onto the cloud and incorporating some form of ray-tracing.
Most notably, Microsoft announced Project Scarlett at E3, a high-end console with 8K and ray-tracing features that will utilise its new xCloud game streaming platform. NVIDIA's new RTX graphics cards can likewise add ray-tracing effects to games in which most of the rendering is still done by a raster engine.
But fully fledged ray-tracing is still too slow and too uneconomical for a single computer. By moving to the cloud and building specialised, more powerful cloud hardware, consoles can keep up with the trend for bigger, better and more beautiful games, finally competing with the fidelity that PC games are known for.
There are a few technical challenges that go hand-in-hand with ray-tracing, the two biggest being latency and performance. Rendering this way isn't economical because every scene is rendered separately for each viewer. On top of this, ray-tracing an entire area produces a wealth of redundant information, regardless of each player's viewing angle.
Imagine rendering one area in a game for 1,000 players, 1,000 times over. As you can imagine, this is extremely time-intensive and expensive. However, this is where cloud-based ray-tracing outperforms other ray-tracing methods. Aspects of a scene can be rendered once for an unlimited number of players, as the ray-tracing work is shared across many of them, making it a highly cost-effective option.
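The sharing argument above is simple arithmetic, sketched here with an invented per-scene cost figure purely for illustration.

```python
# Cost units are arbitrary; the point is the ratio, not the number.
cost_per_render = 1.0      # cost to ray-trace one scene once
players = 1_000

# Traditional approach: each player's machine renders the scene itself.
per_player_rendering = cost_per_render * players

# Cloud-shared approach: the scene is rendered once and streamed to all.
shared_rendering = cost_per_render * 1

savings_factor = per_player_rendering / shared_rendering
```

For 1,000 players in the same area, sharing the render collapses 1,000 units of work into one, and the factor grows with every additional player viewing that area.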
People tend to own one console or gaming machine. With our distributed simulation engine, Aether Engine, we're looking at what can be done beyond the limits of a single console. Aether Engine doesn't just move parts of a game to the cloud, as most game streaming platforms do; it explores how to make full use of the hardware the cloud makes available.
Not only can you move the game to the cloud, enabling studios to develop previously unbuildable games and render without traditional restrictions, but you can also have hundreds of thousands of players interacting with each other in your game - check out our Aether Wars 10,000 player battle for a taste of gameplay at this massive scale!
Aether Engine makes this type of distributed cloud technology possible, enabling ray-tracing, development and gameplay simultaneously. Having already moved the entire game simulation to the cloud, we are now positioned to take advantage of everything cloud-based simulation entails, including fully ray-traced rendering and realtime multiplayer at scale.