Pebble Sets Sights on Fitness Trackers with New App-Making Tools

Pebble unveils developer tools that allow for motion- and gesture-tracking apps for its smart watch.

By Rachel Metz

Smart watch maker Pebble unveiled updates on Wednesday to the software tools that developers can use to build apps for its wrist-worn device. The tools extend the watch’s capabilities and may put it in more direct competition with popular fitness-tracking devices like the Jawbone Up and the Nike Fuelband.

The changes come as a growing number of smart watch makers like Samsung and Sony and fitness-tracking competitors like Jawbone and Nike jostle for consumers’ wrists. The push also shows that Pebble recognizes the role that simple sensing could play in the emergence of more broadly appealing wearable computers (see “So Far, Smart Watches Are Pretty Dumb”).

While not the first smart watch on the market, the $150 Pebble was the first to really resonate with consumers and app makers. Last year, the company raked in $10 million on the crowdfunding site Kickstarter for its gadget, which connects to your smartphone and buzzes to alert you of incoming calls and messages. About 190,000 Pebble watches have been sold.

From the start, Pebble hoped that most of the watch’s functionality would come from third-party developers. But speaking in San Francisco this week, Pebble founder and CEO Eric Migicovsky said that, in the early days, the company’s biggest goal was simply to ship the watch to its tens of thousands of Kickstarter backers.

The software development kit (SDK), officially released in April, was “definitely an alpha” version that Pebble “pushed out kind of to see what people would do with it,” he said. That kit led to the development of over 2,000 apps ranging from simple watch faces and games to remote controls, but there was clearly a lot of work still to be done. “The bug request line was ringing,” Migicovsky said.

The new SDK gives developers the option to add motion- and gesture-tracking features to Pebble apps. Several sports-related apps are already available for the Pebble, but they aren’t able to take advantage of the watch’s accelerometer.
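
To make that concrete, here is a minimal sketch of the kind of logic a motion-tracking app could build on the accelerometer. Pebble apps are actually written in C against Pebble’s SDK; this Python version is a language-neutral illustration, and the sample format and threshold are assumptions rather than anything from Pebble’s documentation.

```python
import math

GRAVITY_MG = 1000      # resting magnitude: 1 g, expressed in milli-g
SHAKE_DELTA_MG = 500   # hypothetical deviation that counts as a "shake"

def detect_shake(samples):
    """Return True if any (x, y, z) sample deviates sharply from 1 g.

    `samples` stands in for the batch of accelerometer readings a
    watch SDK might hand to an app's callback.
    """
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if abs(magnitude - GRAVITY_MG) > SHAKE_DELTA_MG:
            return True
    return False

# A wrist flick shows up as a spike well away from the resting 1 g.
print(detect_shake([(0, 0, -1000)]))                      # False: at rest
print(detect_shake([(1200, 900, -1400), (0, 0, -980)]))   # True: sharp jolt
```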

Developers will also be able to build apps that can cache a small amount of data on the watch when the owner’s smartphone is out of range and automatically transfer it to the smartphone later on. These features could help deal with dropped Bluetooth connections and could be useful for tracking activities like swimming and running, where the user is typically separated from his phone.
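
The store-and-forward idea is simple enough to sketch. The Python below is a toy illustration of the pattern; the class, callback, and 100-entry cap are assumptions for clarity, not Pebble’s actual API or cache limits.

```python
from collections import deque

class StoreAndForward:
    """Buffer readings while the phone is unreachable; upload them later."""

    def __init__(self, send, capacity=100):
        self.send = send                       # callback that uploads one reading
        self.buffer = deque(maxlen=capacity)   # small cache; oldest entries drop first

    def record(self, reading, phone_in_range):
        if phone_in_range:
            self.flush()                  # drain anything cached while disconnected
            self.send(reading)
        else:
            self.buffer.append(reading)   # e.g., lap times logged mid-swim

    def flush(self):
        while self.buffer:
            self.send(self.buffer.popleft())
```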

Additionally, Migicovsky said, the new SDK will make it easier for developers to build Pebble apps that work with both the iPhone and Android smartphones.

Pebble is also bringing in more established companies as app developers. The company said on Wednesday that it will be adding apps from Yelp and Foursquare, among others. Before the end of the year, Pebble will roll out a software update that lets apps built with these new features run on existing watches; the apps will become available at the same time.

Also on Wednesday, Pebble announced an update to its smart watch software for customers who pair the Pebble with an iPhone. It lets users see on their Pebble any app notification they have elected to receive on their iPhone. Previously, iOS notifications on the Pebble had been limited to a handful of functions, including calls, e-mails, and texts.

Texas firm makes world’s first 3D-printed metal gun

By Konrad Krawczyk/Digital Trends

Depending on who you are, where you hail from, and where you stand on guns, 3D printing and related issues, this bit of news will either thrill and astound you, terrify you, or compel you to say “meh.”

But here goes: A company by the name of Solid Concepts has made the world’s first metal gun using a 3D printer.

Solid Concepts, based out of Austin, Texas, modeled its 3D-printed metal pistol on the Browning 1911 firearm. The company set out to make this gun in an effort to prove that it can make weapons fit for “real world applications.”

To make the gun, Solid Concepts used a manufacturing process known as direct metal laser sintering, or DMLS, a 3D-printing technique that produces metal parts for the aerospace and medical industries. In medicine, DMLS is used to make surgical tools, which suggests the process is precise enough for the demands of firearm manufacturing.

“The whole concept of using a laser sintering process to 3D Print a metal gun revolves around proving the reliability, accuracy, and usability of 3D Metal Printing as functional prototypes and end use products,” says Solid Concepts’ vice president of additive manufacturing Kent Firestone. “It’s a common misconception that laser sintering isn’t accurate or strong enough, and we’re working to change people’s perspective.”

While 3D printers are becoming more affordable all the time, don’t get the wrong idea: you can’t just slap down a couple of thousand bucks for a MakerBot 3D printer and hope to make your own firearm from the comfort of your garage.

“The industrial printer we used costs more than my college tuition (and I went to a private university),” said Alyssa Parkinson, a Solid Concepts rep. “And the engineers who run our machines are top of the line; they are experts who know what they’re doing and understand 3D Printing better than anyone in this business.”

In other words, there’s a big difference between the gun made by Solid Concepts and the weapons made by Defense Distributed, a Texas-based firm that designed guns intended to be built using 3D printers in your home.

Robots trained to become less deadly

By Megan Gannon/LiveScience

Before humans can trust robots to work as grocery store cashiers, these machines will have to prove themselves: no squishing our perfect heirloom tomatoes, no stabbing us with kitchen knives at the checkout line.

A group of researchers at Cornell University is teaching a robot dubbed Baxter how to handle, properly and safely, a variety of objects, from sharp knives to egg cartons, based on human feedback in a grocery-store scenario.

“We give the robot a lot of flexibility in learning,” Ashutosh Saxena, an assistant professor of computer science at Cornell, said in a statement. “The robot can learn from corrective human feedback in order to plan its actions that are suitable to the environment and the objects present.”

For their experiments, Saxena and colleagues had a Baxter robot set up as a cashier in a mock checkout line. Baxter is a cheap, flexible robot built by a Boston-based startup called Rethink Robotics. It was primarily designed to work in assembly lines alongside people, but Baxter’s learning skills also make it an easy-to-teach cashier.

As this video of the knife-wielding robot shows, the researchers are teaching Baxter how to handle different items by manually correcting Baxter’s arm motions.

If the robot swings a sharp kitchen knife too close to the human playing the customer at the checkout, for example, a researcher could grab Baxter’s arm and guide it in the right direction.

Over time the robot learns to associate different trajectories with different objects, such as a quick flip for a cereal box or a delicate lift for a carton of eggs, the researchers say.
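
One simple way to formalize that kind of correction is a preference update: the robot scores candidate trajectories with a weight vector and, after a human correction, nudges the weights so the corrected trajectory would have scored higher. The sketch below is in that spirit only; the feature encoding, learning rate, and update rule are illustrative assumptions, not the Cornell team’s exact method.

```python
import numpy as np

def corrective_update(w, planned_features, corrected_features, lr=0.1):
    """Nudge trajectory-scoring weights toward a human-corrected trajectory."""
    return w + lr * (corrected_features - planned_features)

# Hypothetical features: [blade clearance from the customer, motion smoothness].
w = np.zeros(2)
planned   = np.array([0.2, 0.9])   # knife passes close, but the motion is smooth
corrected = np.array([0.8, 0.7])   # the human pulls the arm farther away
w = corrective_update(w, planned, corrected)
print(w)  # weights now reward keeping the blade away from people
```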

Saxena and colleagues will present their work at the Neural Information Processing Systems conference in Lake Tahoe, Calif., next month, but an early version of their research paper is available online.

Exclusive – Hot tech start-up Box picks banks for ’14 IPO – sources

BY NICOLA LESKE AND OLIVIA ORAN

NEW YORK Fri Nov 8, 2013 2:37pm EST

(Reuters) – Data storage company Box, one of the most highly anticipated IPO candidates in Silicon Valley, has selected banks to lead a proposed initial public offering that could come in the first half of 2014, according to three people familiar with the matter.

The fast-growing technology start-up has selected Morgan Stanley, Credit Suisse and JPMorgan Chase & Co to lead the offering, which could raise around $500 million, the people said.

Representatives for Box, Morgan Stanley and Credit Suisse did not immediately respond to requests for comment. JPMorgan declined to comment.

Box is one of several high-profile start-ups gearing up for an IPO, on the heels of a successful debut by Twitter Inc on Thursday, which raised more than $1.8 billion for the microblogging company.

Other closely watched startups that may be exploring an IPO include the mobile payments company Square, Uber, and Pinterest.

A public float for Box would come amid the strongest dollar volume for U.S. IPOs since 2000.

U.S. companies have raised $50.7 billion in proceeds year to date, a 26 percent increase compared to a year earlier, according to Thomson Reuters data.

This year is also the strongest year for the number of U.S. new listings since 2004.

Box, started in 2005 by University of Southern California dropout Aaron Levie and his childhood friend Dylan Smith, has been valued at more than $1.2 billion by private investors, although it remains unclear whether the company is profitable.

The online storage company has tapped into growing demand from professional workers who increasingly want to share documents across different computers, and it has been locked in fierce competition with a number of rivals, including Dropbox, another privately held firm, which is valued at $4 billion.

Box and Dropbox, which provide users with free storage but charge fees for additional space, have been able to steadily gain market share even though tech giants like Google Inc, Microsoft Corp and Apple Inc all offer their own versions of file-sharing utilities.

In 2011, Box rebuffed a takeover offer by Citrix Systems worth more than $500 million.

Inertial Sensors Boost Smartphone GPS Performance

Emerging Technology From the arXiv

GPS is power hungry and often suffers from poor signal strength in city centres. Now computer scientists have worked out how your smartphone’s inertial sensors can fill in the gaps.

If you’ve ever used a smartphone to navigate, you’ll know that one of the biggest problems is running out of juice. GPS sensors are a serious battery drain, so any journey of significant length requires some kind of external power source. Added to that is the difficulty of even getting a GPS signal in city centre locations, where towering office blocks, bridges and tunnels regularly conspire to block it.

So a trick that reduces power consumption while increasing the device’s positioning accuracy would surely be of use.

Today, Cheng Bo at the Illinois Institute of Technology in Chicago and a few pals say they’ve developed just such a program, called SmartLoc, and have tested it extensively while travelling throughout the Windy City.

They say that in the city, GPS has a positioning accuracy of about 40 metres. By comparison, their SmartLoc system pinpoints its location to within 20 metres, 90 per cent of the time.

So how have these guys achieved this improvement? The trick that Bo and pals use is to exploit the smartphone’s inertial sensors to determine its position whenever the GPS is offline.

The way this works is straightforward. Imagine a smartphone fixed to the windscreen of a car driving around town. Given a GPS reading to start off with, the smartphone knows where it is on its built-in or online map. It then uses the inertial sensor to measure its acceleration, indicating a move forwards or a turn to the left or right and so on.

By itself, this kind of data is not very useful because it’s hard to tell how far the vehicle has travelled and whether an acceleration was the result of the car speeding up or going over a humpback bridge, for example.

To get around this, the smartphone examines the section of road on the map looking for road layouts and features that might influence the sensors; things like bends in the road, traffic lights, humpback bridges and so on. Each of these has a specific inertial signature that the phone can spot. In this way, it can match the inertial signals to the road features at that point.

The key here is that each road feature has a unique signature. Bo and co have discovered a wide range of inertial signatures, such as the deceleration, waiting and acceleration associated with a set of traffic lights, the forces associated with turnings (and how these differ from the forces generated by changing lanes, for example) and even the change in the force of gravity when going over a bridge.
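
In outline, the matching step can be as simple as comparing a recent window of inertial readings against stored templates for the features the map says are coming up. The sketch below uses plain Euclidean distance and made-up template traces as illustrative stand-ins for SmartLoc’s actual signature matching.

```python
import numpy as np

def match_feature(trace, templates):
    """Return the road feature whose stored signature best matches `trace`.

    `trace` is a short window of longitudinal acceleration samples;
    `templates` maps feature names (from the map ahead) to traces of
    the same length. Nearest template by Euclidean distance wins.
    """
    return min(templates, key=lambda name: np.linalg.norm(trace - templates[name]))

templates = {
    "traffic_light": np.array([-2.0, -3.0, 0.0, 0.0, 2.5, 1.0]),  # brake, wait, pull away
    "right_turn":    np.array([-1.0, -0.5, 0.5, 1.0, 0.5, 0.0]),
}
observed = np.array([-1.8, -2.9, 0.1, -0.1, 2.4, 1.2])
print(match_feature(observed, templates))  # "traffic_light"
```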

Having gathered this data, the SmartLoc program looks for these signatures while the car is on the move. These guys have tested it using a Galaxy S3 smartphone on the city streets in Chicago and say it works well. They point out that in the city centre, the GPS signal can disappear for distances of up to a kilometre, which would leave a conventional navigation system entirely confused.

However, SmartLoc simply fills in the gaps using its inertial signature database and a map of the area. “Our extensive evaluations shows that SmartLoc improves the localization accuracy to less than 20m for more than 90% roads in Chicago downtown, compared with ≥ 50% with raw GPS data,” they say.

That certainly looks handy. And this kind of performance could also help save battery power by allowing a smartphone to periodically switch off the GPS sensor and run only using the inertial sensor.

What Bo and co don’t do is explain their plans for their new system. One obvious idea would be to release it as an app, since it clearly already works on the Android platform. Another would be to sell the technology to an existing mapping company. Perhaps they’re planning both. Whatever the goal, it seems worth keeping an eye on.

Twitter Must Metamorphose Carefully as It Goes Public

Twitter may make major interface changes to address the growing need to make money.

By Josh Dzieza

Last week Twitter underwent one of the biggest redesigns in its seven-year history, but you’d be forgiven for missing it. Embedded images and video are now displayed automatically in the updates you see, instead of requiring a click to expand and view. Buttons for “retweeting,” “replying,” and “favoriting” tweets were also brought to the surface, cutting in half the number of clicks needed to interact with a tweet.

Which is not to say the changes are insignificant. Indeed, they are a sign of things to come, as Twitter tries to balance its simple appeal and the demands of its users with a growing need to make money.

As Twitter nears its IPO, the new presence of images and videos may help woo people currently using Instagram, Snapchat, or other rapidly growing social photo-sharing services. They certainly make Twitter more appealing to advertisers: previously, users would have to click on a promoted tweet to see an image; now it’s in your face. (After the update, some people joked that Twitter had just launched banner ads.) The newly prominent social buttons will also encourage more interaction, making it easier for Twitter’s many lurkers to engage and lowering the threshold for tweets to go viral.

Twitter has been signaling for some time that a more radical redesign is imminent. As a soon-to-be publicly traded company, it needs to grow its user base, and one way to do that is to reduce the number of people who sign up for Twitter, can’t figure out what to do with it, and never come back. Last month a Reuters/Ipsos poll found that 36 percent of people who joined Twitter say they don’t use it, citing a lack of friends on the service and confusion over how to use it and what it was for, among other reasons. By comparison, only 7 percent of Facebook members say they don’t use that site after signing up. Possibly the rumored television stream—a separate column where people could discuss television shows, broadcasters could promote them, and companies could place ads across both screens—could serve this purpose without disrupting the main feed. And possibly Twitter could keep trying to recommend content it thinks people will like, carrying on the work of the neglected Discover tab—a personalized stream of top stories and tweets that will reportedly be cut—in some other form.

But the Discover column’s neglect indicates an important challenge for Twitter. Its users are reluctant to take too much heavy guidance, and they have the perfect venue for venting their displeasure if they disagree with changes. Even the minor addition of blue lines to sort Twitter conversations into groups elicited a backlash, though that appears to have faded. Twitter is also driven disproportionately by the activity of a small coterie of power users, some of whom have several million followers.

Twitter’s light touch with redesigns shows that it knows this. The challenge will be keeping this in mind going into the IPO—as pressure to make money inevitably increases.

In contrast, Facebook has undergone major overhauls of its user interface several times, each usually accompanied by howls of outrage and petitions (on Facebook) to roll them back. Twitter, meanwhile, looks remarkably similar to the service that launched in 2006. Many of its redesigns amounted to adjusting the interface and features to better accommodate things users were already doing, rather than foisting new features upon them. Some of Twitter’s most iconic features, like the hashtag and the retweet, were first created by users before Twitter built them into the architecture of the site.

“Facebook tends to build what they want for their users rather than listening to users and building what they want,” says Brian Blau, an analyst who covers Twitter for Gartner—“not that one is good or bad.” He attributes the difference partly to the two sites’ different goals: “Facebook has much broader ambitions, to connect the world, and when you say that you can think about different ways of connecting people—the wall, timeline, news feed. You can change the user interface, and people may not like it, but they like being on Facebook so they tolerate it, and now they don’t remember.” Facebook, it’s worth pointing out, is more embedded in users’ real-world social lives, making it harder to quit or ignore.

Twitter, he says, has stayed very focused on a single pillar: real-time, short-form communication. It has kept that focus even though its original constraint, the 140-character limit, was imposed by the SMS texting the site originally relied on and no longer applies.

“Twitter’s beauty is its simplicity and its creativity is its constraint, 140 characters,” says S. Shyam Sundar, the founder of the Media Effects Research Laboratory at Penn State. When your form is your function, Sundar says, it creates certain constraints when it comes to redesigns. You can add videos and images and shortened links to tweets, but if you touch the format of short messages presented in a reverse-chronological stream, Twitter won’t be Twitter.

So far, when Twitter has made design tweaks, it has tended toward giving users greater latitude in how they use the site rather than directing them how to use it (as Facebook might do). When the first Twitter users signed on, the site prompted them with the question, “What are you doing?” As Twitter moved from a microblogging platform often mocked for its mundanity to a place where people posted about news and events, that prompt was swapped out for the more open-ended “What’s happening?” Today it’s simply, “Compose a new tweet.”

As people started using Twitter as much to share and discover links to interesting content as to blog in miniature, Twitter accommodated them, developing its own URL-shortening service. After images became one of Twitter’s major functions—the twitpic of the plane that crash-landed in the Hudson River was a turning point—it decided to host photos itself. Even the “trending topics” chart in the margin, a major new feature in 2009, simply gave a more prominent location to information about what was already happening on Twitter.

So far Twitter has stayed remarkably dedicated to its original interface, taking a hands-off approach to how its 230 million users want to use it. But it will soon have another powerful bunch of people—investors—who also want to be heard.

French online start-up Criteo shares pop in market debut

By Leila Abboud and Jennifer Saba

(Reuters) – Shares in French online advertising firm Criteo rose more than 30 percent in its stock market debut on Nasdaq on Wednesday, showing investor appetite for technology start-ups and delivering a payday to its venture capital backers.

Shares in the company, which uses tracking technology to target ads at consumers surfing the web, opened at $31 and were at $41.40 by 1625 GMT, giving the eight-year-old start-up a market capitalization of roughly $2.3 billion.

The sale of 8.08 million shares raised $250 million, which the Paris-based company will use to fuel its international expansion and growth.

The size of the sale and the initial price were raised twice because of investor demand.

The success of Criteo’s share sale is a sign of investor interest in technology listings against the backdrop of a broader rally of the S&P 500 information technology index and just weeks before the much-anticipated market debut of social network Twitter.

Criteo is one of a number of companies, including Google and Facebook, to benefit from the online ad boom, the result of major companies following their audience to the web and away from newspapers and magazines.

Founded in Paris by Jean-Baptiste Rudelle in 2005, the start-up became a darling among online advertisers by boosting the rate at which Internet surfers click on display ads.

The company developed a technology known as “retargeting,” which catches users who have visited a shopping website without buying anything and then shows them ads for similar items on other sites to tempt them back.
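
Stripped of the ad-exchange machinery, the core retargeting loop is short. The toy Python below is only a sketch of the idea; the cookie store and similarity lookup are hypothetical stand-ins, not Criteo’s systems.

```python
viewed_not_bought = {}   # cookie_id -> items browsed without a purchase

def on_shop_visit(cookie_id, item, purchased):
    if purchased:
        viewed_not_bought.pop(cookie_id, None)    # stop chasing actual buyers
    else:
        viewed_not_bought.setdefault(cookie_id, []).append(item)

def pick_ad(cookie_id, similar_items):
    """On another site, show an ad for items like those the user browsed."""
    items = viewed_not_bought.get(cookie_id)
    if not items:
        return None                    # unknown browser: no retargeted ad
    return similar_items(items[-1])    # most recently viewed item wins

on_shop_visit("abc123", "red sneakers", purchased=False)
print(pick_ad("abc123", lambda item: f"ad: deals on {item}"))
```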

Criteo’s customers, including travel website Hotels.com, telecom operator Orange, and retailer Macy’s, only pay when a web surfer actually clicks on the ad.

In a rare move among French start-up founders, Rudelle moved to Silicon Valley to expand the company, which now operates in 37 countries.

“The U.S. is our number one market today, and a very strategic market for us,” said Rudelle, explaining the choice of listing in New York instead of Paris.

“Being listed on the Nasdaq says that we are here to stay and committed to our clients and partners.”

Criteo has roughly doubled its revenues every year since 2010, reaching 271.9 million euros in 2012. It made a profit of 800,000 euros last year but swung to a loss of 4.9 million euros in the first six months of 2013 because of increased investments.

There have been 26 U.S. technology listings this year, according to Thomson Reuters data, compared with 30 in 2012.

The sale could herald a payday for venture capital firms, which have ploughed some $64 million into Criteo.

Geneva-based Index Ventures was the largest shareholder with a 23.4 percent stake before the share sale. Others include Idinvest Partners with 22.6 percent, Elaia Partners with 13.5 percent and Bessemer Venture Partners with 9.5 percent.

All the funds will be selling relatively small portions of their stakes in the listing, according to the offer documents.

Rudelle will own 8.4-8.6 percent of the group.

JP Morgan, Deutsche Bank Securities and Jefferies are the lead underwriters for the issue.

The Clever Circuit That Doubles Bandwidth

A Stanford startup’s new radio can send and receive information on the same frequency—an advance that could double the speed of wireless networks.

By David Talbot

A startup spun out of Stanford says it has solved an age-old problem in radio communications with a new circuit and algorithm that allow data to be sent and received on the same radio frequency—thus doubling wireless capacity, at least in theory.

The company, Kumu Networks, has demonstrated the feat in a prototype and says it has agreed to run trials of the technology with unspecified major wireless carriers early next year.

The underlying technology, known as full-duplex radio, tackles a problem known as “self-interference.” As radios send and receive signals, the ones they send are billions of times stronger than the ones they receive. Any attempt to receive data on any given frequency is thwarted by the fact that the radio’s receiver is also picking up its own outgoing signal.

For this reason, most radios—including the ones in your smartphone, the base stations serving them, and Wi-Fi routers—send information out on one frequency and receive on another, or use the same frequency but rapidly toggle back and forth. Because of this inefficiency, radios use more wireless spectrum than is necessary.

To solve this, Kumu built an extremely fast circuit that can predict, moment by moment, how much interference a radio’s transmitter is about to create, and then generate a compensatory signal to cancel it out. The circuit generates a new cancellation signal with each packet of data sent, which lets the technique work even in mobile devices, where cancellation is more complex because the objects that signals bounce off are constantly changing. “This was considered impossible to do for the past 100 years,” says Sachin Katti, assistant professor of electrical engineering and computer science at Stanford, and Kumu’s chief executive and cofounder.
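
The digital half of that idea can be sketched in a few lines: estimate how your own, known transmit signal leaks back into the receiver, then subtract the predicted echo. The toy model below (a single linear gain estimated by least squares) is a deliberately idealized assumption; Kumu’s real system combines analog and digital cancellation and copes with far harsher conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
tx = rng.standard_normal(n)               # our own (known) transmit samples
weak_rx = 1e-4 * rng.standard_normal(n)   # the faint signal we want to hear
h = 0.8                                   # toy self-interference channel gain
received = h * tx + weak_rx               # our own signal swamps the receiver

# Estimate the self-interference channel from the known tx (least squares),
# then subtract the predicted echo: the essence of self-cancellation.
h_hat = np.dot(tx, received) / np.dot(tx, tx)
cleaned = received - h_hat * tx

print(f"interference power before: {np.var(h * tx):.2e}")
print(f"residual power after:      {np.var(cleaned - weak_rx):.2e}")
```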

Other companies, including satellite modem maker Comtech, previously used self-cancellation to boost bandwidth on satellite communications. But the Stanford team is the first to demonstrate it in the radios used in networks such as LTE and Wi-Fi, which required cancelling signals that are five orders of magnitude stronger. (More details can be found in this paper.)

Jeff Reed, director of the wireless research center at Virginia Tech, says the new radio rig appears to be a major advance, but he’s awaiting real-world results. “If their claims are true, those are some very impressive numbers,” Reed says. “It requires very precise timing to pull this off.”

This full-duplex technology isn’t the only trick that can seemingly pull new wireless capacity out of thin air. New ways of encoding data stand the chance of making wireless networks as much as 10 times more efficient in some cases (see “A Bandwidth Breakthrough”). Various research efforts are honing new ultrafast sensing and switching tricks to change frequencies on the fly, thus making far better use of available spectrum (see “Frequency Hopping Radio Wastes Less Spectrum”). And emerging software tools allow rapid reconfiguration of wired and wireless networks, creating new efficiencies (see “TR10: Software-Defined Networking”). “A lot of the spectrum is massively underutilized, and this is one of the tools to throw in there to make better use of spectrum,” says Muriel Medard, a professor at MIT’s Research Laboratory of Electronics, and a leader in the field of network coding.

Kumu’s technology—even if it works perfectly—won’t provide a big benefit in all situations. In cases where most traffic is going in one direction—such as during a video download—full-duplex technology opens up capacity that you don’t actually need, like adding inbound lanes during evening outbound rush-hour traffic. Nonetheless, Katti sees benefits “on every wireless device in existence from cell phones and towers to Wi-Fi to Bluetooth and everything in between.” Kumu Networks has received $10 million from investors, including Khosla Ventures and New Enterprise Associates.

Startup Gets Computers to Read Faces, Seeks Purpose Beyond Ads

A technology for reading emotions on faces can help companies sell candy. Now its creators hope it also can take on bigger problems.

Last year more than 1,000 people in four countries sat down and watched 115 television ads, such as one featuring anthropomorphized M&M candies boogying in a bar. All the while, webcams pointed at their faces and streamed images of their expressions to a server in Waltham, Massachusetts.

In Waltham, an algorithm developed by a startup company called Affectiva performed what is known as facial coding: it tracked the panelists’ raised eyebrows, furrowed brows, smirks, half-smirks, frowns, and smiles. When this face data was later merged with real-world sales data, it turned out that the facial measurements could be used to predict with 75 percent accuracy whether sales of the advertised products would increase, decrease, or stay the same after the commercials aired. By comparison, surveys of panelists’ feelings about the ads could predict the products’ sales with 70 percent accuracy.
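
The prediction step is, at heart, a small supervised-learning pipeline: summarize each ad’s panel reactions as features, then fit a three-way classifier against what sales actually did. The sketch below uses made-up numbers and hypothetical aggregate features (fractions of panelists smiling, smirking, frowning), not Affectiva’s data or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per ad: [share smiling, share smirking, share frowning].
X = np.array([
    [0.62, 0.20, 0.05],
    [0.15, 0.10, 0.40],
    [0.30, 0.35, 0.10],
    [0.55, 0.25, 0.08],
    [0.12, 0.08, 0.45],
    [0.28, 0.30, 0.12],
])
y = ["up", "down", "flat", "up", "down", "flat"]  # sales movement after airing

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[0.58, 0.22, 0.06]]))  # a smile-heavy ad: likely "up"
```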

Although this was an incremental improvement statistically, it reflected a milestone in the field of affective computing. While people notoriously have a hard time articulating how they feel, it is now clear that machines can not only read some of their feelings but also go a step further and predict the statistical likelihood of later behavior.

Given that the market for TV ads in the United States alone exceeds $70 billion, insights from facial coding are “a big deal to business people,” says Rosalind Picard, who heads the affective computing group at MIT’s Media Lab and cofounded the company; she left the company earlier this year but is still an investor.

Even so, facial coding has not yet delivered on the broader, more altruistic visions of its creators. Helping to sell more chocolate is great, but when will facial coding help people with autism read social cues, boost teachers’ ability to see which students are struggling, or make computers empathetic?

Answers may start to come next month, when Affectiva launches a software development kit that will let its platform be used for approved apps. The hope, says Rana el Kaliouby, the company’s chief science officer and the other cofounder (see “Innovators Under 35: Rana el Kaliouby”), is to spread the technology beyond marketing. While she would not name the actual or potential partners, she said that “companies can use our technology for anything from gaming and entertainment to education and learning environments.”

Applications such as educational assistance—informing teachers when students are confused, or helping autistic kids read emotions on other people’s faces—figured strongly in the company’s conception. Affectiva, which launched four years ago and now has 35 employees and $20 million in venture funding, grew out of the Picard lab’s manifesto declaring that computers would do society a service if they could recognize and react to human emotions.

Over the years, the lab mocked up prototype technologies. These included a pressure-sensing mouse that could feel when your hand clenched in agitation; a robot called Kismet that could smile and raise its eyebrows; the “Galvactivator,” a skin-conductance sensor that measured arousal through sweating; and the facial coding system, developed and refined by el Kaliouby.

Affectiva bet on two initial products: a wrist-worn gadget called the Q sensor that could measure skin conductance, temperature, and activity levels (which can be indicators of stress, anxiety, sleep problems, seizures, and some other medical conditions); and Affdex, the facial coding software. But while the Q sensor seemed to show early promise (see “Wrist Sensor Tells You How Stressed Out You Are” and “Sensor Detects Emotions through the Skin”), in April the company discontinued the product, seeing little potential market beyond researchers working on applications such as measuring physiological signs that presage seizures. That leaves the company with Affdex, which is mainly being used by market research companies, including Insight Express and Millward Brown, and consumer product companies like Unilever and Mars.

Now, as the company preps its development kit, the market research work may provide an indirect payoff. After spending three years convening webcam-based panels around the world, Affectiva has amassed a database of more than one billion facial reactions. The accuracy of the system could pave the way for applications that read the emotions on people’s faces using ordinary home computers and portable devices. “Affectiva is tackling a hugely difficult problem, facial expression analysis in difficult and unconstrained environments, that a large portion of the academic community has been avoiding,” says Tadas Baltrusaitis, a doctoral student at the University of Cambridge, who has written several papers on facial coding.

What’s more, by using panelists from 52 countries, Affectiva has been teasing out lessons specific to gender, culture, and topic. Facial coding has particular value when people are unwilling to self-report their feelings. For example, el Kaliouby says, when Indian women were shown an ad for skin lotion, every one of them smiled when a husband touched his wife’s midriff—but none of the women would later acknowledge or mention that scene, much less admit to having enjoyed it.

Education may be ripe for the technology. A host of studies have shown the potential; one by researchers at the University of California, San Diego—who have founded a competing startup called Emotient—showed that facial expressions predicted the perceived difficulty of a video lecture and the student’s preferred viewing speed. Another showed that facial coding could measure student engagement during an iPad-based tutoring session, and that these measures of engagement, in turn, predicted how the students would later perform on tests.

Such technologies may be particularly helpful to students with learning disabilities, says Winslow Burleson, an assistant professor at Arizona State University and the author of a paper describing these potential uses of facial coding and other technologies. Similarly, the technology could help clinicians tell whether a patient understands instructions. Or it could improve computer games by detecting player emotions and using that feedback to change the game or enhance a virtual character.

Taken together, the insights from many such studies suggest a role for Affdex in online classrooms, says Picard. “In a real classroom you have a sense of whether the students are actively attentive,” she says. “As you go to online learning, you don’t even know if they are there. Now you can measure not just whether they are present and attentive, but if you are speaking—if you crack a joke, do they smile or smirk?”

Nonetheless, Baltrusaitis says many questions remain about which emotional states in students are relevant, and what should be done when those states are detected. “I think the field will need to develop a bit further before we see this being rolled out in classrooms or online courses,” he says.

The coming year should reveal a great deal about whether facial coding can have benefits beyond TV commercials. Affdex faces competition from other apps and startups, and even some marketers remain skeptical that facial coding is better than traditional methods of testing ads. Not all reactions are expressed on the face, and many other measurement tools claim to read people’s emotions, says Ilya Vedrashko, who heads a consumer intelligence research group at Hill Holliday, an ad agency in Boston.

Yet with every new face, the technology gets stronger. That’s why el Kaliouby believes it is poised to take on bigger problems. “We want to make facial coding technology ubiquitous,” she says.

AI Startup Says It Has Defeated Captchas

Brain-mimicking software can reliably solve a test meant to separate humans from machines.

Captchas, those hard-to-read jumbles of letters and numbers that many websites use to foil spammers and automated bots, aren’t necessarily impossible for computers to handle. An artificial-intelligence company called Vicarious says its technology can solve numerous types of Captchas more than 90 percent of the time.

It’s not the first time that computer scientists have managed to fool this method of separating man from machine. But Vicarious says its technique is more reliable and more useful than others because it doesn’t require mountains of training data for it to recognize letters and numbers consistently. Nor does it take a lot of computing power. Vicarious does it with a visual perception system that can mimic the brain’s ability to process visual information and recognize objects.

The purposes go well beyond Captchas: Vicarious hopes to eventually sell systems that can easily extract text and numbers from images (such as in Google’s Street View maps), diagnose diseases by checking out medical images, or let you know how many calories you’re about to eat by looking at your lunch. “Anything people do with their eyes right now is something we aim to be able to automate,” says cofounder D. Scott Phoenix.

Vicarious expands on an old idea of using an artificial neural network that is modeled on the brain and builds connections between artificial neurons (see “10 Breakthrough Technologies: Deep Learning”). One big difference in Vicarious’s approach, says cofounder Dileep George, is that its system can be trained with moving images rather than only static ones.

Vicarious set its cognition algorithms to work on solving Captchas as a way of testing its approach. After training its system to recognize numbers and letters, it could solve Captchas from PayPal, Yahoo, Google, and other online services. The company says its average accuracy rate ranges from 90 to 99 percent, depending on the type of Captcha (for example, some feature characters arranged within a grid of rectangles, while others might have characters in front of a wavy background). The system performed best with Captchas composed of letters that look like they’re made out of fingerprints.
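
Vicarious has not published its system, but most text-Captcha solvers share the same generic segment-then-classify loop, sketched here with hypothetical stand-in functions rather than the company’s actual network.

```python
def solve_captcha(image, segment_characters, classifier, min_confidence=0.9):
    """Decode a text Captcha, or return None if any character is too uncertain.

    `segment_characters` splits the image into per-character crops and
    `classifier` maps a crop to a (label, confidence) pair; both are
    hypothetical stand-ins for a trained perception system.
    """
    decoded = []
    for crop in segment_characters(image):
        label, confidence = classifier(crop)
        if confidence < min_confidence:   # bail out rather than guess badly
            return None
        decoded.append(label)
    return "".join(decoded)
```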

“Captcha” stands for “completely automated public Turing test to tell computers and humans apart.” Captchas were created in 2000 by researchers at Carnegie Mellon University and are solved by millions of Web users daily.

That’s not about to change: Vicarious isn’t going to release its system publicly. And besides, as Luis von Ahn, one of the creators of the Captcha, points out, many people have shown evidence of computerized Captcha-solving over the years. Von Ahn even helpfully passed along a link to a list of such instances.