By Jennifer Booton
Imagine Watson with the power of reason and better communication skills.
The Watson supercomputer may be able to beat reigning Jeopardy champions, but scientists at IBM (IBM) are developing new, super-smart computer chips modeled on the human brain that might ultimately prove much more impressive.
These new silicon “neurosynaptic chips,” which run on roughly the same amount of energy it takes to power a light bulb, will fuel a software ecosystem that researchers hope will one day enable a new generation of apps mimicking the human brain’s abilities of sensory perception, action and cognition.
It’s akin to giving devices like microphones and speakers brains of their own, letting them process the data they take in through trillions of synapses and neurons and draw intelligent conclusions.
IBM’s ultimate goal is to build a chip ecosystem with ten billion neurons and a hundred trillion synapses that consumes just a kilowatt of power and occupies less space than a two-liter soda bottle.
“We are fundamentally expanding the boundary of what computers can do,” said Dharmendra Modha, principal investigator of IBM’s SyNAPSE cognitive computing project. “This could have far reaching impacts on technology, business, government and society.”
The researchers envision a wave of new, innovative “smart” products derived from these chips that would alter the way humans live in virtually all walks of life, including commerce, logistics, location, society, even the environment.
“Modern computing systems were designed decades ago for sequential processing according to a pre-defined program,” IBM said in a release. “In contrast, the brain—which operates comparatively slowly and at low precision—excels at tasks such as recognizing, interpreting, and acting upon patterns.”
These chips would give way to a whole new “cognitive type” of processing, said Bill Risk, who works on the IBM Research SyNAPSE Project, marking one of the most dramatic changes to computing since the traditional von Neumann architecture, based on the sequential processing of ones and zeros, was adopted in the mid-1940s.
“These operations result in actions rather than just stored information, and that’s a whole different world,” said Roger Kay, president of Endpoint Technologies Associates, who has written about the research. “It really allows for a human-like assessment of problems.”
It is quite a complex system, and it is still in the early stages of development. But IBM researchers have rapidly completed the first three phases of what will likely be a multi-stage project, collaborating with a number of academic partners and collecting some $53 million in funding. They are hopeful the pace of advancement will continue.
Modha cautioned, however, that this new type of computing wouldn’t serve as a replacement for today’s computers but as a complementary sibling, with traditional digital architecture serving as the left brain, with its speed and analytic ability, and the next era of computing acting as the right brain, operating much more slowly but more cognitively.
“Together, they help to complete the computing technology we have,” Modha said.
Providing a real-life example of how their partnership might one day work, Kay imagined a medical professional triaging a patient.
Digital computers would provide basic functions such as the patient’s vitals, while the cognitive computer would cross-reference data collected at the scene in real time with information stored on the digital computer to assess the situation and provide relevant treatment recommendations.
“It could be a drug overdose or an arterial blockage, a human might not know which is which [from the naked eye],” explained Kay. “But a [cognitive] computer could read the symptoms, reference literature, then vote using a confidence level that can kind of infer which one is more likely the case.”
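Kay’s description of “voting using a confidence level” resembles a simple confidence-weighted classifier. The following minimal Python sketch is purely illustrative: the candidate conditions, symptom weights, and the vote function are all invented for this example, not anything IBM or Kay has published.

```python
# Hypothetical sketch of confidence-weighted inference, loosely following
# Kay's triage example. All symptom profiles and weights are invented.

# Reference "literature": how strongly each symptom suggests each condition.
REFERENCE_PROFILES = {
    "drug overdose":     {"pinpoint pupils": 0.9, "slow breathing": 0.8, "chest pain": 0.1},
    "arterial blockage": {"pinpoint pupils": 0.05, "slow breathing": 0.3, "chest pain": 0.9},
}

def vote(observed_symptoms):
    """Score each candidate condition against the observed symptoms and
    return the best match with a normalized confidence in [0, 1]."""
    scores = {
        condition: sum(profile.get(s, 0.0) for s in observed_symptoms)
        for condition, profile in REFERENCE_PROFILES.items()
    }
    total = sum(scores.values()) or 1.0  # guard against dividing by zero
    best = max(scores, key=scores.get)
    return best, scores[best] / total

condition, confidence = vote(["pinpoint pupils", "slow breathing"])
print(f"{condition} (confidence {confidence:.0%})")  # drug overdose (confidence 83%)
```

Rather than returning a single hard answer, the system ranks every candidate and reports how sure it is, which is what lets a human stay in the loop for the final call.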
Endless Possibilities Seen
The IBM researchers have put together reusable building blocks to make cognitive applications easier to build and to create an ecosystem for developers. These blocks come in the form of “corelets” that each serve a particular function, such as the ability to perceive sound or colors.
So far they have developed 150 corelets, and they intend to eventually let third parties submit more after rigorous testing. Researchers hope corelets could one day be used to build “real-life cognitive systems.”
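IBM’s actual corelet programming environment is not described in detail here, so the sketch below is only a loose illustration of the building-block idea in Python: the Corelet class, its then method for chaining, and the toy sound-detection corelets are all hypothetical stand-ins, not IBM’s API.

```python
# Illustrative sketch of the corelet idea: small, reusable blocks that each
# perform one function and compose into a larger cognitive system.
# The Corelet class and the corelets below are hypothetical, not IBM's API.

class Corelet:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn  # the single function this corelet performs

    def __call__(self, signal):
        return self.fn(signal)

    def then(self, other):
        """Chain two corelets into a pipeline, which is itself a corelet."""
        return Corelet(f"{self.name} -> {other.name}",
                       lambda signal: other(self(signal)))

# Two toy corelets: one "perceives" loudness, one classifies the result.
perceive_sound = Corelet("perceive_sound",
                         lambda samples: sum(abs(s) for s in samples) / len(samples))
classify_level = Corelet("classify_level",
                         lambda level: "loud" if level > 0.5 else "quiet")

# Compose them into a small system, as corelets are meant to be combined.
detector = perceive_sound.then(classify_level)
print(detector.name)               # perceive_sound -> classify_level
print(detector([0.9, -0.8, 0.7]))  # loud
```

The appeal of this design is composability: each corelet does one thing, and larger systems are assembled by wiring corelets together rather than programming the hardware directly.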
To help get the ball rolling, the researchers envisioned a slew of products that would put these genius chips to use in real-world functions.
Here are just a few:
-An autonomous robot dubbed “Tumbleweed” could be deployed for search and rescue missions in emergency situations. Researchers picture the sphere-shaped device, outfitted with “multi-modal sensing” via 32 mini cameras and speakers, surveying a disaster and identifying people in need. It might be able to communicate with them, letting them know help is on its way or directing them to safety.
-For personal use, low-power, lightweight glasses could be designed for the nearly blind. Using these chips to recognize and analyze objects through built-in cameras, the glasses could plot a route through a crowded room full of obstacles, directing the visually impaired through speakers.
-Putting these chips to use in a business function, the researchers foresee a product they’ve dubbed the “conversation flower” that could process audio and video feeds on conference calls to identify specific people by their voice and appearance while automatically transcribing the conversation.
-Giving a glimpse into their potential use in the medical world, the researchers imagine a thermometer that not only measures temperature but is also outfitted with sensors that could detect certain bacteria by their unique odor, giving an alert if medical attention is needed.
-In an environmental function, researchers could see this technology being mounted on sensor buoys that monitor shipping lanes for safety and environmental protection.
Given the fluid nature of the project, it’s unclear how long it will take for the first generation of cognitive computers to reach real-world applications, but Modha and his team are optimistic they will arrive sooner rather than later.
“We need cognitive systems that understand the environment, can deal with ambiguity and can act in a real-time, real-life context,” Modha said. “We want to create a brain in a box.”