Hardware designed specifically to run complex neural networks could let personal devices make sense of the world.
By Tom Simonite
A powerful approach to artificial intelligence could be coming to smartphones.
Researchers from Purdue University are working to commercialize designs for a chip to help mobile processors make use of the AI method known as deep learning. Although the power of deep learning has inspired companies including Google, Facebook, and Baidu to invest in the technology, so far it has been limited to large clusters of high-powered computers. When Google developed software that learned to recognize cats from YouTube videos, the experiment required 16,000 processors (see “Self-Taught Software”).
Being able to implement deep learning in more compact and power-efficient ways could lead to smartphones and other mobile devices that can understand the content of images and video, says Eugenio Culurciello, a professor at Purdue working on the project. In December, at the Neural Information Processing Systems conference in Nevada, the group demonstrated that a co-processor connected to a conventional smartphone processor could help it run deep learning software. The software was able to detect faces or label parts of a street scene. The co-processor’s design was tested on an FPGA, a reconfigurable chip that can be programmed to try out a new hardware design without the considerable expense of fabricating a completely new chip.
The prototype is much less powerful than systems like Google’s cat detector, but it shows how new forms of hardware could make it possible to use the power of deep learning more widely. “There’s a need for this,” says Culurciello. “You probably have a collection of several thousand images that you never look at again, and we don’t have a good technology to analyze all this content.”
Devices such as Google Glass could also benefit from the ability to understand the abundant pictures and videos they capture, he says. A person’s images and videos might be searchable using text queries such as “red car” or “sunny day with Mom.” Likewise, novel apps could be developed that take action when they recognize particular people, objects, or scenes.
Deep learning software works by filtering data through a hierarchical, multilayered network of simulated neurons that are individually simple but can exhibit complex behavior when linked together (see “Deep Learning”). Conventional processors run such networks inefficiently because the workload, huge numbers of simple multiply-and-add operations that can happen in parallel, differs sharply from the sequential logic most software is built on.
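To make that layered filtering concrete, here is a minimal sketch in Python. The layer sizes, random weights, and ReLU nonlinearity are illustrative assumptions, not details of the Purdue design or of any production system; the point is only that each simulated neuron computes a simple weighted sum, and complexity emerges from stacking layers.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied by each simulated neuron.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Filter an input through a stack of layers.

    Each layer is a (weights, biases) pair. Individually each
    neuron just computes a weighted sum followed by a nonlinearity,
    but linking many layers yields complex behavior.
    """
    for weights, biases in layers:
        x = relu(weights @ x + biases)
    return x

rng = np.random.default_rng(0)

# A toy three-layer network: a 64-value input (e.g., a flattened
# image patch) narrowing step by step to 10 output activations.
sizes = [64, 32, 16, 10]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

patch = rng.standard_normal(64)   # stand-in for pixel data
print(forward(patch, layers))     # 10 activations, one per label
```

Note the dominant cost: every layer is a matrix-vector product, which is exactly the kind of dense, repetitive arithmetic that general-purpose CPUs handle poorly and specialized hardware handles well.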
Purdue’s co-processor design is specialized above all else to run multilayered neural networks and to put them to work on streaming imagery. In tests, the prototype proved about 15 times as efficient as a graphics processor at the same task, and Culurciello believes further improvements could make it 10 times more efficient than it is now.
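When such networks process imagery, the dominant operation is convolution: sliding a small filter across every position of a frame. The sketch below is a hypothetical illustration of that multiply-accumulate pattern, not the Purdue hardware itself; a streaming co-processor would hard-wire this inner loop rather than execute it as software.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over an image, computing a weighted sum
    (multiply-accumulate) at every position. This dense, regular
    arithmetic is what dedicated hardware can stream through fixed
    circuitry far more efficiently than a general-purpose processor.
    """
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

frame = np.random.rand(240, 320)           # one frame of streaming video
edge_filter = np.array([[1.0, 0.0, -1.0],  # toy 3x3 feature detector
                        [2.0, 0.0, -2.0],
                        [1.0, 0.0, -1.0]])
features = conv2d(frame, edge_filter)
print(features.shape)  # (238, 318) map of filter responses
```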
Narayan Srinivasa, director of the center for neural and emergent systems at HRL Laboratories, a research lab jointly owned by Boeing and General Motors, says it makes sense to use a co-processor to implement deep learning networks more efficiently. In conventional computers, a processor and its memory reside in separate chunks of hardware, whereas deep learning networks, like the real neural networks that inspired them, intertwine memory and processing. Srinivasa’s own research addresses that mismatch with a more extreme solution: designing chips with silicon neurons and synapses that mimic those of real brains (see “Thinking in Silicon”).
The Purdue group’s solution doesn’t represent such a fundamental rethinking of how computer chips operate. That may limit how efficiently their designs can run deep learning neural networks but also make it easier to get them into real-world use. Culurciello has already started a company, called TeraDeep, to commercialize his designs.
“The idea is that we sell the IP to implement this so that a large manufacturer like Qualcomm or Samsung or Apple could add this functionality to their processor so they could process images,” says Culurciello. Yann LeCun, a pioneer of deep learning at New York University who recently started leading Facebook’s research in the area, is an advisor to the company.