
Solving Parallelism: Building AI beyond Moore's Law

May 29, 2018 7:00:00 AM

Most of us are familiar with Moore’s Law: the idea that the number of transistors on an integrated circuit grows exponentially, doubling roughly every one to two years. This phenomenon is responsible for our ever-increasing capacity to process information much, much faster. It continually unlocks a wealth of opportunities for us: everything from advanced medical imaging, to the development of sophisticated communication technologies, to the decoding of the human genome. The benefits of what Moore’s Law predicted back in the ‘70s have had an undeniable impact on the economy and the way the modern world operates.
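
As a rough illustration of what that doubling rate implies, the small sketch below projects a transistor count forward under an assumed fixed two-year doubling period. The 2,300-transistor baseline (roughly the Intel 4004 of 1971) is used purely for illustration.

```python
# Rough illustration of Moore's Law as compound doubling.
# Assumes a fixed two-year doubling period; the 2,300-transistor baseline
# (roughly the 1971 Intel 4004) is an illustrative starting point only.

def projected_transistors(start_count: int, start_year: int, year: int,
                          doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward under a fixed doubling period."""
    doublings = (year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

if __name__ == "__main__":
    for year in (1971, 1981, 1991, 2001, 2011):
        print(year, f"{projected_transistors(2300, 1971, year):,.0f}")
```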

Moore foresaw the explosion of computer technology and computing power and the ways this would transform the world as we knew it. Computer chips (or integrated circuits) that can fit more and more electric components on them have helped lay the foundations of Artificial Intelligence and pave the way for the AI products and services of the future. Not only did Moore predict the future of the computer chip, but he also managed to propel its success forward by stressing how important it would be to future endeavours.

We are now at a point where the continued miniaturisation of microchips is reaching physical limits. At sub-10 nm process sizes (where the smallest chip features, such as transistors, are 10 nm or less across), a host of new issues crops up as a result of the laws of physics. This is where the time-honoured classical models built on Maxwell’s electromagnetic equations cease to be effective and quantum phenomena begin to dominate. The problem becomes severe enough that, instead of the comparatively straightforward analysis of classical physics, microelectronic engineers must turn to quantum electrodynamics (QED) to understand and accurately design circuits. Processors have not advanced in line with Moore’s Law since around 2013, and progress in silicon design and fabrication is only going to keep slowing over the coming years.


This has serious implications for AI. The current cutting edge of AI lies in deep learning and capsule networks -- neural networks that aim to mimic the human brain more closely. So how do we actually get AI to advance and begin to take on problems that require human levels of intelligence? A lot of it comes down to the huge amounts of data used to train Artificial Intelligence programmes and machines. Right now, we use algorithms to train AI -- and not traditional serial algorithms, but something known as algorithmic parallelism: the use of many different algorithms that run separately on multiple processing devices before their results are brought back together at the end of their calculations. It’s a complex feat to achieve.
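
To make the split-and-recombine idea concrete, here is a minimal sketch using Python’s standard multiprocessing module; the workload is a toy stand-in for the kind of independent, per-chunk computation a real training job would perform.

```python
# Minimal sketch of algorithmic parallelism: independent pieces of work run
# on separate processes and their partial results are combined at the end.
# The workload here is a toy stand-in for a real per-chunk computation.
from multiprocessing import Pool

def partial_result(chunk):
    """Stand-in for an expensive, independent computation on one chunk of data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]          # split the work 8 ways
    with Pool(processes=8) as pool:
        partials = pool.map(partial_result, chunks)  # run chunks in parallel
    print(sum(partials))                             # combine at the end
```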

Parallelism itself is not new. As early as the 1970s, tools for understanding concurrent programs and mathematically proving their correctness (such as Hoare’s Communicating Sequential Processes) had become central to the endeavour of building parallel computers such as the Transputer. More recently, the rise of GPUs (Graphics Processing Units) out of the games industry has made widely available processors capable of parallel computing to a previously unseen degree. Such devices are said to possess the property of ‘massive parallelism’, and the algorithms that utilise them are said to be massively parallel.

AI, too, runs on silicon-based devices. While a modern GPU may have thousands of simple processing units, the human brain consists of some one hundred billion neurons. Modern computers have logic gates that switch billions of times a second, making them excellent at arithmetic, but the human brain has orders of magnitude more parallelism than any computer architecture comes close to achieving.

Furthermore, the sort of parallel computing to which GPUs are applied is highly specialised and not universal in the manner of CPUs. These programs have certain well-defined properties -- a kind of mathematical symmetry -- that GPUs are designed to exploit. Most algorithms do not possess such properties, and so the workhorse of much of parallel computing remains the CPU. Building and scaling such parallel programs requires distributed hardware -- i.e. many networked computers working on portions of the problem and communicating frequently in order to arrive at a unified solution.
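
The sketch below illustrates that distributed, communicating style of parallelism using MPI via the mpi4py library (our choice of library for illustration, not one named here): each process works on its own slice of the data, then all processes communicate to combine their partial results.

```python
# Sketch of the distributed, message-passing style of parallelism that CPU
# clusters rely on, using MPI via mpi4py (an illustrative choice of library).
# Each rank works on its own slice of the data, then all ranks communicate
# to combine their partial sums.
# Run with e.g.:  mpiexec -n 4 python distributed_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()       # this process's id
size = comm.Get_size()       # total number of processes

n = 1_000_000
local = sum(x * x for x in range(rank, n, size))   # each rank takes a slice

total = comm.allreduce(local, op=MPI.SUM)          # communicate to combine
if rank == 0:
    print(total)
```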


Currently, parallelism through distributed computing requires huge investment from businesses, including substantial hardware costs and considerable work from data scientists, who still struggle to work with distributed algorithms at scale. A good example is the big data industry, which has built an entire ecosystem around technologies such as Apache Hadoop and Spark. Ostensibly, one should be able to apply these technologies to AI and machine learning algorithms at massive scale. In practice, you need large teams with highly specialised expertise across the specific tools and technologies in the big data ecosystem.
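
A deliberately simple PySpark sketch of the same split-and-combine pattern is shown below. The few lines of code hide the real cost: provisioning and tuning the cluster, partitioning the data sensibly, and the specialised operational expertise needed to keep such a system running in production.

```python
# Deliberately simple PySpark sketch of a distributed split-and-combine job.
# The brevity is misleading: the hard part is the cluster behind it, not
# these few lines.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("toy-parallel-sum").getOrCreate()

# Distribute the data across 64 partitions and reduce the results back down.
rdd = spark.sparkContext.parallelize(range(1_000_000), numSlices=64)
total = rdd.map(lambda x: x * x).reduce(lambda a, b: a + b)

print(total)
spark.stop()
```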

In order to get anywhere close to solving these problems, we’re working at the software level to create a new operating system that strips out unnecessary layers of software (of which there are many!) while providing programmers with semantics for expressing parallel and distributed programs. These semantics, called Hadean Processes, are provably correct and have guaranteed properties that make scalability effortless. This will allow scientists to run algorithms far more quickly and easily, and even to build AI algorithms spanning many thousands of CPUs. It is currently the only viable route to the sort of parallelism possessed by the human brain.

One problem remains -- ultimately, the human brain is not just a massively parallel computer; it is also highly power-efficient thanks to its architecture. To that end, we intend, further down the line, to implement Hadean Processes directly in hardware to achieve even greater breakthroughs in power consumption, performance, and massive parallelism.
