Desktop PCs Supercharged to Rival Pricey Supercomputers
By Deborah Borfitz
March 15, 2021 | Researchers at the University of Sussex have figured out a way to give desktop PCs the same horsepower as supercomputers, which could help democratize research involving large-scale brain simulations. In a recent demonstration, they showed how a single turbocharged GPU could perform on par with the last-generation IBM Blue Gene/Q system, says James Knight, a research fellow in computer science at the university.
The technique is borrowed from earlier work by Russian mathematician and neuroscientist Eugene Izhikevich, co-founder, chairman, and CEO of Brain Corp, a company endeavoring to make robots as commonplace as computers and cell phones. He pioneered the method back in 2005, when computers were too slow for the idea to take hold, explains Knight.
The approach uses “procedural connectivity” to generate data about neuron connectivity on the fly rather than storing and retrieving it from memory, says Knight, a capability enabled by the computational power of modern GPUs. A detailed description of the innovation was recently published in Nature Computational Science (DOI: 10.1038/s43588-020-00022-7) in a paper Knight co-authored with Thomas Nowotny, professor of computer science at the University of Sussex.
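To make the idea concrete, here is a minimal Python sketch of procedural connectivity for a simple fixed-probability random network. It is an illustration only, not the authors' GPU implementation: the names (targets_of, propagate_spikes, P_CONNECT) and the NumPy-on-CPU setting are assumptions chosen for readability.

```python
import numpy as np

N_PRE, N_POST = 10_000, 10_000   # population sizes (illustrative)
P_CONNECT = 0.1                  # fixed connection probability
WEIGHT = 0.05                    # uniform synaptic weight (illustrative)

def targets_of(pre_idx, base_seed=1234):
    """Re-derive the postsynaptic targets of one presynaptic neuron.

    Instead of storing a connectivity matrix, an RNG seeded deterministically
    from the neuron index regenerates the same targets on every call."""
    rng = np.random.default_rng(base_seed + pre_idx)
    return np.flatnonzero(rng.random(N_POST) < P_CONNECT)

def propagate_spikes(spiking_neurons, input_current):
    """Accumulate synaptic input only for the neurons that spiked this step."""
    for pre in spiking_neurons:
        input_current[targets_of(pre)] += WEIGHT
    return input_current

# A few spikes are processed without ever materializing the ~10^7-entry
# connection list that a stored-connectivity simulator would keep in memory.
current = propagate_spikes(spiking_neurons=[3, 42, 999],
                           input_current=np.zeros(N_POST))
print(current.sum())   # total injected current this timestep
```

Trading memory for computation this way only pays off when connectivity can be regenerated very cheaply, which is why the parallel arithmetic throughput of today's GPUs makes the 2005-era idea practical.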
Izhikevich previously worked at the Neurosciences Institute and Nowotny at the University of California, San Diego, institutions separated by about half a mile, and the two knew one another. But Izhikevich was not a collaborator on the project, says Knight, a former games developer who helped adapt Call of Duty for mobile platforms.
Today’s GPUs have about 2,000 times the computing power of their predecessors 15 years ago, making them a "perfect match" for spiking neural networks, he notes. This type of artificial intelligence (AI) is typically needed to model the brain.
Importantly, the technique can be reproduced on any computer with a suitable GPU because the code is “fully open source and available to all,” Knight says, noting that instructions are included with the paper. That could open new opportunities in brain research, including the investigation of neurological disorders.
For this demonstration, the researchers simulated a recent model of the macaque visual cortex with 4 million neurons and 24 billion synapses—making it an exercise otherwise requiring a supercomputer. The model choice was a pragmatic one, as it was readily available with instructions for its reproduction, he says.
Initialization of the model took 6 minutes, and simulating each second of biological time took 7.7 minutes in the ground state and 8.4 minutes in the resting state, continues Knight. That is up to 35% less time than the IBM Blue Gene/Q system needed in a 2018 simulation, where one rack of the supercomputer took around 5 minutes to initialize the model and approximately 12 minutes to simulate one second of biological time.
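As a quick sanity check on that comparison, the rounded timings quoted above can be plugged in directly; this is back-of-the-envelope arithmetic on the article's own numbers, not a figure taken from the paper, and with this rounding it lands near the reported reduction.

```python
# Times are minutes per second of biological time, as quoted above.
gpu_ground, gpu_resting, blue_gene_q = 7.7, 8.4, 12.0
print(f"ground state:  {1 - gpu_ground / blue_gene_q:.0%} less time")   # ~36%
print(f"resting state: {1 - gpu_resting / blue_gene_q:.0%} less time")  # ~30%
```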
With supercomputing, he points out, the model is distributed across hundreds if not thousands of network-connected machines to pool enough memory to store the synapses. The required communication between all those machines tends to slow things down.
The latest supercomputers have more memory per node, making them much faster than the IBM Blue Gene/Q system as well as the researchers’ GPU approach, Knight adds. But the machines cost hundreds of millions of dollars, limiting access to a fortunate few.
Using procedural connectivity to accelerate brain simulations is also the greener alternative, he says. Knight estimates that the new approach requires about 10 times less energy than the newest supercomputer.
He and Nowotny are now working on applying the technique to “bio-inspired machine learning,” which bridges the disciplines of mainstream AI and computational neuroscience. They are hopeful, Knight says, that brain-inspired machine learning will “help solve problems that biological brains excel at, but which are currently beyond mainstream AI.”