A number of key developments over the last decade have accelerated progress in the field of AI.
Firstly, we’ve seen the likes of IBM, Intel, Google, and Nvidia make significant strides in the development of neuromorphic hardware - specialised electronic circuitry that more closely mimics the neurons of the brain. Where traditional computer architectures offer software developers logical, arithmetic, and data-manipulation operations, modern AI systems exploit these neuromorphic structures to mimic neuron behaviour directly in service of AI applications.
This has coincided with massive progress in the development of artificial intuition - behavioural characteristics that enable AIs to apply human-like problem solving across a far broader range of problems.
Excitingly, these advancements converge in our current area of focus: the field of simulation. As AI matures, the need to develop and train these systems in increasingly immersive and complex scenarios becomes a primary concern for progress. Knowing that the ultimate goal of the field of AI is to produce an Artificial General Intelligence (AGI), which can outperform humans in all spheres of human endeavour, we need to build a world rich enough, large enough, and sufficiently populated with other complex entities to maximise this development.
We’ve seen a lot of early success with the preliminary deployment of deep reinforcement learning algorithms in game-playing. DeepMind’s AlphaGo Zero mastered Go beyond the capabilities of any human player, and its successor AlphaZero did the same for chess, revealing strategic insights that humans had never before considered. For example, rather than adhering to the Reinfeld values that typically govern human play (where chess pieces are ascribed specific values: pawn=1, knight=3, rook=5, queen=9, etc.), AlphaZero often optimises for better board position with near-total disregard for piece value. Interestingly, this stems from its ability to teach itself through self-play rather than observing human games, thus avoiding contamination from human biases. Yet in doing so, AlphaZero’s resulting style is more human-like than that of any other chess engine: it relies more strongly on heuristics, or intuition, needing to examine far fewer board positions to arrive at clever strategies.
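The conventional piece values mentioned above amount to a simple material count - the very heuristic a self-taught engine is willing to trade away for positional advantage. A minimal sketch of that count (function name and FEN-based interface are our own illustrative choices, not anything from DeepMind):

```python
# Conventional (Reinfeld) piece values used in human chess play.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(fen: str) -> int:
    """Return White's material lead (in pawns) for a FEN position.

    Uppercase letters denote White pieces, lowercase Black; kings and
    non-piece characters (digits, slashes) are ignored.
    """
    balance = 0
    for ch in fen.split()[0]:  # first FEN field: piece placement
        value = PIECE_VALUES.get(ch.upper())
        if value is not None:
            balance += value if ch.isupper() else -value
    return balance

# Starting position: material is exactly equal.
start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
print(material_balance(start))  # 0
```

A traditional engine leans heavily on an evaluation like this; AlphaZero learned when such a count is worth ignoring.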
These achievements are a significant preliminary result, achieved only by DeepMind pairing its deep learning techniques with simulation technology. But the ‘worlds’ in which this performance exists are still very much constrained. Stuart Russell and Peter Norvig, the authors of Artificial Intelligence: A Modern Approach, describe a range of characteristics of the environment an AI must perform in. Four of these are of particular importance:
Games like Chess and Go are fully observable, deterministic, static, and discrete - the kind of environment that is the easiest to tackle for AI. The next challenge for AI will be to win against humans in environments that are partially observable, stochastic, dynamic, and continuous.
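This four-way classification can be made concrete in a few lines of code. The class below and its example classifications are purely illustrative, following Russell and Norvig's taxonomy as described above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    """The four environment properties discussed above."""
    name: str
    fully_observable: bool
    deterministic: bool
    static: bool
    discrete: bool

    def difficulty(self) -> int:
        """Count how many of the four 'easy' properties are absent."""
        return 4 - sum([self.fully_observable, self.deterministic,
                        self.static, self.discrete])

chess = Environment("Chess", True, True, True, True)
dota2 = Environment("Dota 2", False, False, False, False)

print(chess.difficulty())  # 0 - the easiest class of environment
print(dota2.difficulty())  # 4 - hard on every axis
```

Chess sits at one extreme; a real-time game like Dota 2 fails all four properties at once, which is precisely what makes it the next milestone.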
Encouragingly, we’re beginning to see this happen, with OpenAI - co-founded by Elon Musk - playing the Multiplayer Online Battle Arena (MOBA) game Dota 2. MOBAs are a subset of the Real-Time Strategy (RTS) genre, which typically requires a higher level of general intelligence to balance the individual components of resource management, base building, and offensive and defensive strategy. OpenAI recently managed to beat a top human player in Dota 2, albeit under very specific circumstances (on a particular map and with a specific hero character in their arsenal). But this does give some indication of the game-theoretic complexity of deeply strategic AI as we move closer to real-world scenarios. At the same time, DeepMind and Blizzard are collaborating on an AI testing ground in the RTS game StarCraft II, highlighting the increasing symbiosis between the AI and gaming industries.
The goal of these experiments is to use a virtual world to develop and prepare an AI for the real world. Accordingly, what we will ultimately need to produce a sophisticated AGI is a massive simulated world possessing highly complex systems and a population of entities with individual motivations, routines, and responses. Seeing the game-changing nature of the insights that AlphaZero gleaned within the confines of a chessboard, imagine what such an AI could learn within an accurately modelled city - or indeed, an entire continent or globe?
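A populated world of the kind described above is, at its core, an agent-based simulation: many entities, each carrying its own state and routine, stepped together over time. The toy sketch below illustrates that structure only - every name and rule in it is an illustrative assumption of ours, not a description of Hadean's engine:

```python
import random

class Agent:
    """A toy entity with one motivation (hunger) and a simple routine."""

    def __init__(self, name: str, rng: random.Random):
        self.name = name
        self.hunger = 0
        self.rng = rng

    def step(self) -> str:
        # Routine: forage once hunger crosses a threshold, otherwise rest.
        if self.hunger >= 3:
            self.hunger = 0
            return f"{self.name} forages"
        self.hunger += self.rng.randint(1, 2)
        return f"{self.name} rests"

def run(steps: int = 5, n_agents: int = 2, seed: int = 0) -> list:
    """Step every agent in lockstep and return the event log."""
    rng = random.Random(seed)
    agents = [Agent(f"agent{i}", rng) for i in range(n_agents)]
    log = []
    for _ in range(steps):
        for agent in agents:
            log.append(agent.step())
    return log

for event in run():
    print(event)
```

Scaling this pattern from two agents to a modelled city is an engineering problem of distribution and scheduling, which is exactly where simulation engines come in.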
At Hadean, we’ve made significant strides in developing world-level intelligence atop our distributed simulation engine. We’re extremely excited to see the new capabilities in AI that are becoming possible as simulation technology matures, and to actively build them out with our cutting-edge partners.