AI: two simple vowels that sit together to represent one of the biggest leaps in technology, and the biggest shift in human behaviour, that mankind has ever seen. Artificial Intelligence has been written about for the last ten years, and much paranoia as well as sharp debate has been published - just see Futurism’s Artificial Intelligence Is Our Future. But Will It Save Or Destroy Humanity? This well-thought-out piece argues that it is already too late to turn back, and that the exaggerated concern of many may simply be part of adjusting to a new technology: “Experts expressed similar concerns about quantum computers, and about lasers and nuclear weapons—applications for that technology can be both harmful and helpful”.
So if we follow this line of argument and assume that all technologies carry some amount of risk - and that it’s our role to measure that risk and train these systems in the best possible way - then maybe we should spend a bit of time digging into how Artificial Intelligence actually learns. There are plenty of organisations out there looking at harnessing the power of AI to democratise power, distribute wealth and resources fairly and provide open education systems: AI for Good, AI for Earth, the Future of Life Institute, and the list goes on. More AI projects and companies are being set up and funded than ever before - so how are they training their systems, and is it working well at the moment?
Let’s take Google’s DeepMind as an example. It was trained using simple games to improve the system’s ability to solve logic-based problems, just as most AI systems begin their training with simple data inputs. The system learns the basic rules of a game and then moves on to increasingly complex games to improve its cognitive ability. The outstanding AlphaGo documentary shows how this training plays out in real time, and pits AlphaGo against the world champion of the ancient Chinese board game Go, which is often cited as the world’s hardest game.
AlphaGo uses deep reinforcement learning, which means it learns only from experience. At the outset of its training, for example, it might play 100,000 games in a row to develop an initial style of play. As it plays more human opponents and continues to play against itself, AlphaGo begins to incorporate elements of these matches and improves its game along the way. With current engineering constraints, AI can only train sequentially like this. It’s still extremely difficult to enable even a very limited parallel capacity - that is, to enable AlphaGo to learn by viewing or playing multiple games at a time.
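To make "learning only from experience" concrete, here is a heavily simplified, hypothetical sketch: a tabular value-learning agent that teaches itself tic-tac-toe purely through self-play. This is nothing like AlphaGo's actual method (deep neural networks plus Monte Carlo tree search); it only illustrates the self-play loop, where the same agent plays both sides and updates its value estimates from game outcomes.

```python
import random
from collections import defaultdict

# Toy illustration only: tabular self-play on tic-tac-toe, assigning the
# terminal reward back to every move played. Not AlphaGo's algorithm.

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)        # (state, move, player) -> estimated value
ALPHA, EPSILON = 0.3, 0.1     # learning rate and exploration rate

def choose(board, player):
    """Epsilon-greedy move selection over the current value table."""
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[("".join(board), m, player)])

def self_play_episode():
    """Play one game against itself, then update every move's value."""
    board, player = [" "] * 9, "X"
    history = []              # (state, move, player) for each ply
    while True:
        move = choose(board, player)
        history.append(("".join(board), move, player))
        board[move] = player
        win = winner(board)
        if win or " " not in board:
            # +1 for the winner's moves, -1 for the loser's, 0 for a draw
            for state, m, p in history:
                reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                Q[(state, m, p)] += ALPHA * (reward - Q[(state, m, p)])
            return win
        player = "O" if player == "X" else "X"

random.seed(0)
results = [self_play_episode() for _ in range(20000)]
```

Even this toy version shows the sequential constraint the article describes: each of the 20,000 games must finish before the next begins, because every game's updates feed the policy used in the following game.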
The benefits of parallel training, however, are massive. It would let us exponentially speed up the development of AI capabilities and their applications globally. Rather than completing 100,000 games in a row, imagine an AI playing all of those matches simultaneously. DeepMind’s agents were already playing Atari video games from the ‘70s, like Pong and Space Invaders, back in 2013, and other AI pioneers such as Elon Musk’s OpenAI are now using more sophisticated contemporary video games (in this case, DOTA) to train their AIs. The complexity of a real-time strategy game - which incorporates base building, resource gathering, economic functions, pathfinding and tactical groupings - offers an inherently richer learning environment than the game of Go, albeit with significantly more complex compute requirements. This is where mass parallelism could enable a significant breakthrough (provided an AI had the computational resources required).
The applications are significant and far-reaching. Consider that machine learning is currently being used to improve the efficiency and accuracy of diagnoses for pathologists, for example. If we can diagnose people far more accurately and speedily than we currently do, we might lose far fewer people in the fight against cancer, thanks to early care or even preventative treatments. Not only would the impact of this be evident in the medical field, but it would have a knock-on impact on the overall health and stability of the economy (the healthier we are as a population, the stronger our economies).
DeepMind co-founder Demis Hassabis gives a very clear summary of why we need to aggressively pursue the development of advanced AIs - very simply, “to improve the speed of breakthroughs”. Whether in health care, climate change, science or agriculture, AI enables faster progress than we’ve ever experienced before. This would be an incredible accomplishment and a significant benefit to all of humanity.
HadeanOS is a cloud-first operating system that has been engineered and optimized for performance across massively distributed computing infrastructures. HadeanOS natively understands the dynamic scale and real-time demands of modern applications in the cloud and removes the need for complex operations and engineering.