What powers Facebook and Google's AI – and how computers could mimic brains

January 6, 2016 by Thomas Nowotny, University of Sussex, The Conversation
Credit: Akritasa, CC BY-SA

Google and Facebook have open-sourced the designs for the computing hardware that powers the artificial intelligence logic used in their products. These intelligent algorithms power Google's search and recommendation functions, Facebook's Messenger digital assistant M – and of course both firms' use of targeted advertising.

Facebook's bespoke computer servers, codenamed Big Sur, are packed with graphics processing units (GPUs) – the graphics cards used in PCs to play the latest video games with 3D graphics. So too is the hardware that powers Google's TensorFlow AI. So why is artificial intelligence computing built from graphics processors rather than mainstream computer processors?

Originally, GPUs were designed as co-processors that operated alongside a computer's main central processing unit (CPU) to off-load demanding computational graphics tasks. Rendering 3D graphics scenes is what is known as an embarrassingly parallel task. With no connection or interdependence between one area of an image and another, the job can easily be broken down into separate tasks that can be processed in parallel – that is, at the same time – completing the job far more quickly.
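
To illustrate what an embarrassingly parallel task looks like in code, here is a minimal, hypothetical CUDA program (illustrative only, not code from Facebook or Google) that brightens a greyscale image by assigning one GPU thread to each pixel. Because no pixel depends on any other, all the threads can run at the same time.

    // Hypothetical sketch for illustration only – not production code.
    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Each thread brightens exactly one pixel. No thread reads another
    // thread's pixel, which is what makes the task embarrassingly parallel.
    __global__ void brighten(unsigned char *pixels, int n, int amount)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            int v = pixels[i] + amount;
            pixels[i] = v > 255 ? 255 : v;   // clamp to the valid 8-bit range
        }
    }

    int main(void)
    {
        const int n = 1920 * 1080;           // one full-HD greyscale frame
        unsigned char *h = (unsigned char *)malloc(n);
        for (int i = 0; i < n; ++i) h[i] = i % 200;

        unsigned char *d;
        cudaMalloc(&d, n);
        cudaMemcpy(d, h, n, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover every pixel.
        brighten<<<(n + 255) / 256, 256>>>(d, n, 50);
        cudaMemcpy(h, d, n, cudaMemcpyDeviceToHost);

        printf("First pixel after brightening: %d\n", h[0]);
        cudaFree(d);
        free(h);
        return 0;
    }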

It's this parallelism that has led GPU manufacturers to put their hardware to a radically different use. Optimised to achieve maximum computational throughput only on massively parallel tasks, GPUs can serve as specialised processors that run any parallelised code, not just graphical tasks. CPUs, on the other hand, are optimised to handle single-threaded (non-parallel) tasks quickly, because most general-purpose software is still single-threaded.

NVIDIA Tesla M40 GPU Accelerator. Credit: NVIDIA news

In contrast to CPUs with one, two, four or eight processing cores, modern GPUs have thousands: the NVIDIA Tesla M40 used in Facebook's servers has 3,072 so-called CUDA cores, for example. However, this massive parallelism comes at a price: software has to be specifically written to take advantage of it, and GPUs are hard to program.
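
To give a flavour of what "specifically written" means, here is a small, hypothetical CUDA example (assumed for illustration, not drawn from Big Sur or TensorFlow) running the classic non-graphical "saxpy" vector operation. The programmer must explicitly decide how the data is carved up across thousands of threads – nothing is parallelised automatically.

    // Hypothetical sketch: saxpy (y = a*x + y) written for the GPU.
    #include <cuda_runtime.h>
    #include <stdio.h>

    // One thread per vector element, so a GPU with thousands of cores
    // can work on thousands of elements simultaneously.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                   // guard: the grid may overshoot n
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        const int n = 1 << 20;       // about a million elements
        float *x, *y;                // unified memory, visible to CPU and GPU
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // The launch configuration is the part that has to be written
        // specifically for the GPU: threads are grouped into blocks.
        int threadsPerBlock = 256;
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        saxpy<<<blocks, threadsPerBlock>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f (expected 4.0)\n", y[0]);
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

Getting such code right – choosing block sizes, guarding array bounds, synchronising with the host – is exactly the extra effort that makes GPU programming harder than ordinary CPU programming.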

What makes GPUs suitable for AI?

One of the reasons GPUs have emerged as the supercomputing hardware of choice is that some of the most demanding computational problems happen to be well-suited to parallel execution.

Facebook Big Sur server containing 8 NVIDIA Tesla M40 GPUs. Credit: Facebook

A prime example is deep learning, one of the leading-edge developments in AI. The neural network concept that underpins this powerful approach – large meshes of highly interconnected nodes – is the same one that was written off as a failure in the 1990s. But now that technology allows us to build much larger and deeper networks, the approach achieves radically improved results. These neural networks power the speech recognition, language translation and semantic search facilities that Google, Facebook and many apps use today.

Training a neural network so that it "learns" works much like establishing and strengthening connections between neurons in the brain. Computationally, this learning process can be parallelised, so it can be accelerated using GPU hardware. Machine learning also requires large numbers of examples to learn from, which again lends itself to parallel processing. With open source machine learning tools such as the Torch code library and GPU-packed servers, neural network training can run many times faster on GPU-based systems than on CPU-based ones.
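
As a highly simplified sketch of why training parallelises so well (this is illustrative CUDA, not Torch's actual implementation), the gradient-descent step below updates every connection weight independently, so one GPU thread can handle each weight:

    // Hypothetical sketch of one parallel gradient-descent update.
    #include <cuda_runtime.h>
    #include <stdio.h>

    // Every weight is updated independently of every other weight,
    // so each GPU thread can take care of one connection.
    __global__ void sgd_step(float *weights, const float *grads,
                             float learning_rate, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            weights[i] -= learning_rate * grads[i];
    }

    int main(void)
    {
        const int n = 3000000;       // a modest network's worth of weights
        float *w, *g;
        cudaMallocManaged(&w, n * sizeof(float));
        cudaMallocManaged(&g, n * sizeof(float));
        for (int i = 0; i < n; ++i) { w[i] = 0.5f; g[i] = 0.1f; }

        // Millions of independent updates in a single kernel launch.
        sgd_step<<<(n + 255) / 256, 256>>>(w, g, 0.01f, n);
        cudaDeviceSynchronize();

        printf("w[0] after one step: %f (expected 0.499)\n", w[0]);
        cudaFree(w);
        cudaFree(g);
        return 0;
    }

In a real training run the expensive parts – the matrix multiplications of the forward and backward passes – are parallelised in the same per-element spirit, which is where the GPU speed-up comes from.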

Titan supercomputer at the Oak Ridge National Laboratory. Credit: Oak Ridge National Laboratory

Are GPUs the future of computing?

For decades we have become accustomed to the version of Moore's law which holds that computer processing power will roughly double every two years. This has mainly been achieved through miniaturisation, which leads to less heat generation, which allows CPUs to be run faster. However, this "free lunch" has come to an end as semiconductors have been miniaturised close to silicon's theoretical, elemental limits. Now, the only credible route to greater speeds is through greater parallelism, as demonstrated with the rise of multi-core CPUs over the last ten years. GPUs, however, have a head start.

Besides AI, GPUs are also used for fluid dynamics and aerodynamics simulations, physics engines and brain simulations, to name just a few examples. Some of the world's most powerful computers, such as the Titan supercomputer at Oak Ridge National Laboratory, currently the world's second fastest supercomputer, are built on NVIDIA's GPU accelerators, while competitors include Intel's Phi parallel co-processor that powers Tianhe-2, the world's fastest supercomputer. However, not all problems are easily parallelisable, and programming for these environments is difficult.

Arguably, the future of computing, at least for AI, may lie in the even more radically different neuromorphic computers. IBM's TrueNorth chip is one example, with another under development by the €1 billion Human Brain Project. In this model, rather than simulating neural networks with a network of many processors, the chip is the neural network: the individual silicon transistors on the chip form circuits that process and communicate via electrical signals – not dissimilar to neurons in biological brains.

Proponents argue that these systems will help us to finally scale up our neural networks to the size and complexity of the human brain, bringing AI to the point where it can rival human intelligence. Others, particularly brain researchers, are more cautious – there may well be a lot more to the human brain than just its sheer number and density of neurons.

Either way, it's likely that much of what we learn about the brain from now on will come through the very supercomputers designed to ape the way it works.
