A technique called deep learning could help Facebook understand its users and their data better.
By Tom Simonite.
Facebook is set to get an even better understanding of the 700 million people who share details of their personal lives using the social network each day.
A new research group within the company is working on an emerging and powerful approach to artificial intelligence known as deep learning, which uses simulated networks of brain cells to process data. Applying this method to data shared on Facebook could allow for novel features, and perhaps boost the company’s ad targeting.
Deep learning has shown potential to enable software to do things such as work out the emotions or events described in text even if they aren’t explicitly referenced, recognize objects in photos, and make sophisticated predictions about people’s likely future behavior.
The eight-strong group, known internally as the AI team, only recently started work, and details of its experiments are still secret. But Facebook’s chief technology officer, Mike Schroepfer, will say that one obvious place to use deep learning is to improve the news feed, the personalized list of recent updates he calls Facebook’s “killer app.” The company already uses conventional machine learning techniques to prune the 1,500 updates that average Facebook users could possibly see down to 30 to 60 that are judged to be most likely to be important to them. Schroepfer says Facebook needs to get better at picking the best updates due to the growing volume of data its users generate and changes in how people use the social network.
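Facebook has not published how its feed-ranking model works, but the pruning Schroepfer describes can be pictured as scoring every candidate update and keeping only the top few. The sketch below is purely illustrative; the feature names, weights, and cutoff are invented, not Facebook's.

```python
# Hypothetical sketch of news-feed pruning: score each candidate update with a
# simple learned-weight model and keep only the highest-ranked ones. The
# features and weights here are invented for illustration.

# Each candidate update is described by a few (made-up) engagement signals.
candidate_updates = [
    {"id": 1, "friend_closeness": 0.9, "post_age_hours": 2.0, "likes": 40},
    {"id": 2, "friend_closeness": 0.2, "post_age_hours": 30.0, "likes": 3},
    {"id": 3, "friend_closeness": 0.7, "post_age_hours": 5.0, "likes": 120},
]

# Weights a conventional machine-learning model might have learned offline.
WEIGHTS = {"friend_closeness": 2.0, "post_age_hours": -0.05, "likes": 0.01}

def score(update):
    """Weighted sum of features -- a stand-in for a trained ranking model."""
    return sum(WEIGHTS[name] * update[name] for name in WEIGHTS)

def prune_feed(updates, keep=2):
    """Keep only the top-scoring updates, mimicking the 1,500-to-30-60 cut."""
    return sorted(updates, key=score, reverse=True)[:keep]

print([u["id"] for u in prune_feed(candidate_updates)])
```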
“The data set is increasing in size, people are getting more friends, and with the advent of mobile, people are online more frequently,” Schroepfer told MIT Technology Review. “It’s not that I look at my news feed once at the end of the day; I constantly pull out my phone while I’m waiting for my friend, or I’m at the coffee shop. We have five minutes to really delight you.”
Schroepfer says deep learning could also be used to help people organize their photos, or choose which is the best one to share on Facebook.

Facebook’s foray into deep learning sees it following its competitors Google and Microsoft, which have used the approach to impressive effect in the past year. Google has hired and acquired leading talent in the field (see “10 Breakthrough Technologies 2013: Deep Learning”), and last year created software that taught itself to recognize cats and other objects by reviewing stills from YouTube videos. The underlying deep learning technology was later used to slash the error rate of Google’s voice recognition services (see “Google’s Virtual Brain Goes to Work”).
Researchers at Microsoft have used deep learning to build a system that translates speech from English to Mandarin Chinese in real time (see “Microsoft Brings Star Trek’s Voice Translator to Life”). Chinese Web giant Baidu also recently established a Silicon Valley research lab to work on deep learning.
Less complex forms of machine learning have underpinned some of the most useful features developed by major technology companies in recent years, such as spam detection systems and facial recognition in images. The largest companies have now begun investing heavily in deep learning because it can deliver significant gains over those more established techniques, says Elliot Turner, founder and CEO of AlchemyAPI, which rents access to its own deep learning software for text and images.
“Research into understanding images, text, and language has been going on for decades, but the typical improvement a new technique might offer was a fraction of a percent,” he says. “In tasks like vision or speech, we’re seeing 30 percent-plus improvements with deep learning.” The newer technique also allows much faster progress in training a new piece of software, says Turner.
Conventional forms of machine learning are slower because before data can be fed into learning software, experts must manually choose which features of the data the software should pay attention to, and they must label the data to signify, for example, that certain images contain cars.
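A minimal sketch of that conventional pipeline: a person decides which features matter and supplies labels before any learning happens. The features and labels below are toy values chosen for illustration, not anything Facebook or the companies mentioned actually use.

```python
# Conventional machine learning: hand-chosen features plus human-supplied labels.
from sklearn.linear_model import LogisticRegression

# Hand-chosen features for a few images: [mean_brightness, edge_count, redness]
X = [
    [0.8, 120, 0.1],
    [0.3, 450, 0.7],
    [0.7, 110, 0.2],
    [0.2, 500, 0.6],
]
# Human-supplied labels: 1 = "contains a car", 0 = "does not".
y = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                       # learning starts only after the manual work
print(model.predict([[0.25, 480, 0.65]]))
```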
Deep learning systems can learn with much less human intervention because they can figure out for themselves which features of the raw data are most useful to understanding it. They can even work on data that hasn’t been labeled, as Google’s cat-recognizing software did. Systems able to do that typically use software that simulates networks of brain cells, known as neural nets, to process data, and require more powerful collections of computers to run.
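For contrast, the toy example below uses a tiny autoencoder, a simple neural net that learns a compressed representation of unlabeled data on its own, with no hand-picked features or labels. It is a rough illustration of the idea, not Facebook's or Google's actual architecture.

```python
# Tiny autoencoder: a neural net that discovers its own features from
# unlabeled data by learning to reconstruct its input.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 8))           # unlabeled "raw" inputs, 8 values each

n_hidden = 3                          # size of the learned feature layer
W_enc = rng.normal(0, 0.1, (8, n_hidden))
W_dec = rng.normal(0, 0.1, (n_hidden, 8))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

learning_rate = 0.5
for _ in range(2000):
    hidden = sigmoid(data @ W_enc)        # features the network finds itself
    output = sigmoid(hidden @ W_dec)      # attempt to reconstruct the raw input
    err = output - data                   # reconstruction error drives learning
    grad_out = err * output * (1 - output)
    grad_hid = (grad_out @ W_dec.T) * hidden * (1 - hidden)
    W_dec -= learning_rate * hidden.T @ grad_out / len(data)
    W_enc -= learning_rate * data.T @ grad_hid / len(data)

print("reconstruction error:", float(np.mean(err ** 2)))
```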
Facebook’s AI group will work on both applications that can help the company’s products and on more general research on the topic that will be made public, says Srinivas Narayanan, an engineering manager at Facebook helping to assemble the new group. He says one way Facebook can help advance deep learning is by drawing on its recent work creating new types of hardware and software to handle large data sets (see “Inside Facebook’s Not-So-Secret New Data Center”). “It’s both a software and a hardware problem together; the way you scale these networks requires very deep integration of the two,” he says.