Exclusive – Hot tech start-up Box picks banks for ’14 IPO – sources

BY NICOLA LESKE AND OLIVIA ORAN

NEW YORK Fri Nov 8, 2013 2:37pm EST

(Reuters) – Data storage company Box, one of the most highly anticipated IPO candidates in Silicon Valley, has selected banks to lead a proposed initial public offering that could come in the first half of 2014, according to three people familiar with the matter.

The fast-growing technology start-up has selected Morgan Stanley, Credit Suisse and JPMorgan Chase & Co to lead the offering that could raise around $500 million, the people said.

Representatives for Box, Morgan Stanley and Credit Suisse did not immediately respond to requests for comment. JPMorgan declined to comment.

Box is one of several high-profile start-ups gearing up for an IPO, on the heels of a successful debut by Twitter Inc on Thursday, which raised more than $1.8 billion for the microblogging company.

Other closely watched startups which may be exploring an IPO include mobile payments company Square, Uber and Pinterest.

A public float for Box would come amid the strongest dollar volume for U.S. IPOs since 2000.

U.S. companies have raised $50.7 billion in proceeds year to date, a 26 percent increase compared to a year earlier, according to Thomson Reuters data.

This year is also the strongest year for the number of U.S. new listings since 2004.

Box, started in 2005 by University of Southern California drop-out Aaron Levie and his childhood friend Dylan Smith, has been valued at more than $1.2 billion by private investors, although it remains unclear whether the company is profitable.

The online storage company has tapped into growing demand by professional workers who increasingly want to share documents across different computers and has been locked in fierce competition with a number of rivals, including Dropbox, another privately held firm that is valued at $4 billion.

Box and Dropbox, which provide users with free storage but charge fees for additional space, have been able to steadily gain market share even though tech giants like Google Inc, Microsoft Corp and Apple Inc all offer their own versions of file-sharing utilities.

In 2011, Box rebuffed a takeover offer by Citrix Systems worth more than $500 million.

Inertial Sensors Boost Smartphone GPS Performance

Emerging Technology From the arXiv

GPS is power hungry and often suffers from poor signal strength in city centres. Now computer scientists have worked out how your smartphone’s inertial sensors can fill in the gaps.

If you’ve ever used a smartphone to navigate, you’ll know that one of the biggest problems is running out of juice. GPS sensors are a significant battery drain and so any journey of significant length requires some kind of external power source. Added to that is the difficulty in even getting a GPS signal in city centre locations where towering office blocks, bridges and tunnels regularly conspire to block the signal.

So a trick that reduces power consumption while increasing the device’s positioning accuracy would surely be of use.

Today, Cheng Bo at the Illinois Institute of Technology in Chicago and a few pals say they’ve developed just such a program, called SmartLoc, and have tested it extensively while travelling throughout the Windy City.

They say that in the city, GPS has a positioning accuracy of about 40 metres. By comparison, their SmartLoc system pinpoints its location to within 20 metres, 90 per cent of the time.

So how have these guys achieved this improvement? The trick that Bo and pals use is to exploit the smartphone’s inertial sensors to determine its position whenever the GPS is offline.

The way this works is straightforward. Imagine a smartphone fixed to the windscreen of a car driving around town. Given a GPS reading to start off with, the smartphone knows where it is on its built-in or online map. It then uses the inertial sensor to measure its acceleration, indicating a move forwards or a turn to the left or right and so on.

By itself, this kind of data is not very useful because it’s hard to tell how far the vehicle has traveled and whether the acceleration was the result of the car speeding up or going over a humpback bridge, for example.

To get around this, the smartphone examines the section of road on the map looking for road layouts and features that might influence the sensors; things like bends in the road, traffic lights, humpback bridges and so on. Each of these has a specific inertial signature that the phone can spot. In this way, it can match the inertial signals to the road features at that point.

The key here is that each road feature has a unique signature. Bo and co have discovered a wide range of inertial signatures, such as the deceleration, waiting and acceleration associated with a set of traffic lights, the forces associated with turnings (and how these differ from the forces generated by changing lanes, for example) and even the change in the force of gravity when going over a bridge.
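
To make that matching step concrete, here is a minimal Python sketch of the general idea. It is not the authors’ SmartLoc code: the signature values, the snap-to-map radius, and the map format are invented for illustration, and a real system would match whole sensor traces rather than window averages.

```python
# A minimal sketch of matching a window of inertial readings against
# known road-feature "signatures" and snapping the dead-reckoned
# position to the matching map feature when GPS is unavailable.
import numpy as np

# Hypothetical signatures: average lateral acceleration (m/s^2) and yaw
# rate (rad/s) expected over a few-second window for each feature type.
SIGNATURES = {
    "left_turn":    {"lateral_acc": -2.0, "yaw_rate":  0.35},
    "right_turn":   {"lateral_acc":  2.0, "yaw_rate": -0.35},
    "lane_change":  {"lateral_acc":  1.0, "yaw_rate":  0.05},
    "traffic_stop": {"lateral_acc":  0.0, "yaw_rate":  0.00},
}

def classify_window(lateral_acc, yaw_rate):
    """Return the feature whose template is closest to the averages
    observed over the current window of sensor readings."""
    obs = np.array([np.mean(lateral_acc), np.mean(yaw_rate)])
    best, best_dist = None, float("inf")
    for name, sig in SIGNATURES.items():
        template = np.array([sig["lateral_acc"], sig["yaw_rate"]])
        dist = np.linalg.norm(obs - template)
        if dist < best_dist:
            best, best_dist = name, dist
    return best

def snap_to_map(position, feature, map_features, radius=100.0):
    """If a map feature of the detected type lies near the dead-reckoned
    position, snap the position estimate to that feature's location."""
    nearby = [f for f in map_features
              if f["type"] == feature
              and np.linalg.norm(np.array(f["xy"]) - position) < radius]
    if not nearby:
        return position
    nearest = min(nearby,
                  key=lambda f: np.linalg.norm(np.array(f["xy"]) - position))
    return np.array(nearest["xy"], dtype=float)

# Example: a window that looks like a right turn, near a mapped junction.
window_acc = [1.8, 2.1, 2.2, 1.9]
window_yaw = [-0.30, -0.38, -0.36, -0.33]
feature = classify_window(window_acc, window_yaw)
estimate = snap_to_map(np.array([120.0, 45.0]), feature,
                       [{"type": "right_turn", "xy": (130.0, 50.0)}])
print(feature, estimate)
```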

Having gathered this data, the SmartLoc program looks for these signatures while the car is on the move. These guys have tested it using a Galaxy S3 smartphone on the city streets in Chicago and say it works well. They point out that in the city centre, the GPS signal can disappear for distances of up to a kilometre, which would leave a conventional navigation system entirely confused.

However, SmartLoc simply fills in the gaps using its inertial signature database and a map of the area. “Our extensive evaluations shows that SmartLoc improves the localization accuracy to less than 20m for more than 90% roads in Chicago downtown, compared with ≥ 50% with raw GPS data,” they say.

That certainly looks handy. And this kind of performance could also help save battery power by allowing a smartphone to periodically switch off the GPS sensor and run only using the inertial sensor.

What Bo and co don’t do is explain their plans for their new system. One obvious idea would be to release it as an app–it clearly already works on the Android platform. Another idea would be to sell the technology to an existing mapping company. Perhaps they’re planning both. Whatever the goal, it seems worth keeping an eye on.

Twitter Must Metamorphose Carefully as It Goes Public

Twitter may make major interface changes to address the growing need to make money.

By Josh Dzieza

Last week Twitter underwent one of the biggest redesigns in its seven-year history, but you’d be forgiven for missing it. Embedded images and video are now displayed automatically in the updates you see, instead of requiring a click to expand and view. Buttons for “retweeting,” “replying,” and “favoriting” tweets were also brought to the surface, cutting in half the number of clicks needed to interact with a tweet.

Which is not to say the changes are insignificant. Indeed, they are a sign of things to come, as Twitter tries to balance its simple appeal and the demands of its users with a growing need to make money.

As Twitter nears its IPO, the new presence of images and videos may help woo people currently using Instagram, Snapchat, or other rapidly growing social photo-sharing services. They certainly make Twitter more appealing to advertisers: previously users would have to click on a promoted tweet to see an image; now it’s in your face. (After the update, some people joked that Twitter just launched banner ads.) The newly prominent social buttons will also encourage more interaction, making it easier for Twitter’s many lurkers to engage, and lowering the threshold for tweets to go viral.

Twitter has been signaling for some time that a more radical redesign is imminent. As a soon-to-be publicly traded company, it needs to increase users, and one way to do that would be to reduce the number of people who sign up for Twitter, can’t figure out what to do with it, and never come back. Last month a Reuters/Ipsos poll found that 36 percent of people who joined Twitter say they don’t use it, citing a lack of friends on Twitter and confusion over how to use it and what it was for, among other reasons. In comparison, only 7 percent of Facebook members say they don’t use the site after signing up. Possibly the rumored television stream—a separate column for people discussing television shows, for broadcasters to promote shows, and for companies to place ads across both screens—could serve this purpose without disrupting the main feed. And possibly Twitter could continue its attempts to recommend content it thinks people will be interested in, carrying on the work of the neglected Discover tab—a personalized stream of top stories and tweets that will reportedly be cut—in some other form.

But the Discover column’s neglect indicates an important challenge for Twitter. Its users are reluctant to take too much heavy guidance, and they have the perfect venue for venting their displeasure if they disagree with changes. Even the minor addition of blue lines to sort Twitter conversations into groups elicited a backlash, though that appears to have faded. Twitter is also driven disproportionately by the activity of a small coterie of power users, some of whom have several million followers.

Twitter’s light touch with redesigns shows that it knows this. The challenge will be keeping this in mind going into the IPO—as pressure to make money inevitably increases.

In contrast to Twitter, Facebook has undergone major overhauls of its user interface several times, each of them usually accompanied by howls of outrage and petitions (on Facebook) to roll them back. Twitter looks extremely similar to when it launched in 2006. Many of Twitter’s redesigns amounted to adjusting its interface and features to better accommodate things its users are already doing, rather than foisting new features upon them. Some of Twitter’s most iconic features, like the hashtag and retweet, were first created by users before Twitter built them into the architecture of the site.

“Facebook tends to build what they want for their users rather than listening to users and building what they want,” says Brian Blau, an analyst who covers Twitter for Gartner—“not that one is good or bad.” He attributes the difference partly to the two sites’ different goals: “Facebook has much broader ambitions, to connect the world, and when you say that you can think about different ways of connecting people—the wall, timeline, news feed. You can change the user interface, and people may not like it, but they like being on Facebook so they tolerate it, and now they don’t remember.” Facebook, it’s worth pointing out, is more embedded in users’ real-world social lives, making it harder to quit or ignore.

Twitter, he says, has stayed very focused on a single pillar: real-time, short-form communication. It has kept that focus even though its original constraint, the 140-character limit, was imposed by the SMS texting the site originally used and no longer applies.

“Twitter’s beauty is its simplicity and its creativity is its constraint, 140 characters,” says S. Shyam Sundar, the founder of the Media Effects Research Laboratory at Penn State. When your form is your function, Sundar says, it creates certain constraints when it comes to redesigns. You can add videos and images and shortened links to tweets, but if you touch the format of short messages presented in a reverse-chronological stream, Twitter won’t be Twitter.

So far, when Twitter has made design tweaks, it has tended toward giving users greater latitude in how they use the site rather than directing them how to use it (as Facebook might do). When the first Twitter users signed on, the site prompted them with the question, “What are you doing?” As Twitter moved from a microblogging platform often mocked for its mundanity to a place where people posted about news and events, that injunction was swapped out for the more open-ended, “What’s happening?” Today it’s simply, “Compose a new tweet.”

As people started using Twitter as a way to share and discover hyperlinks to interesting content as much as a blogging platform, Twitter accommodated them, developing its own URL shortening service. After images became one of Twitter’s major functions—the twitpic of the plane that crash-landed in the Hudson River was a turning point—it decided to host its own photos. Even the “trending topic” chart in the margin, a major new feature in 2009, simply gave a more prominent location to information about what was already happening on Twitter.

So far Twitter has stayed remarkably dedicated to its original interface, taking a hands-off approach to how its 230 million users want to use it. But it will soon have another powerful bunch of people—investors—who also want to be heard.

French online start-up Criteo shares pop in market debut

By Leila Abboud and Jennifer Saba

(Reuters) – Shares in French online advertising firm Criteo rose more than 30 percent in its stock market debut on Nasdaq on Wednesday, showing investor appetite for technology start-ups and delivering a payday to its venture capital backers.

Shares in the company, which uses tracking technology to target ads at consumers surfing the web, opened at $31 and were at $41.40 by 1625 GMT, giving the eight-year-old start-up a market capitalization of roughly $2.3 billion.

The sale of 8.08 million shares raised $250 million that the Paris-based company will use to fuel its international expansion and growth.

The size of the sale and the initial price were raised twice because of investor demand.

The success of Criteo’s share sale is a sign of investor interest in technology listings against the backdrop of a broader rally of the S&P 500 information technology index and just weeks before the much-anticipated market debut of social network Twitter.

Criteo is one of a number of companies, including Google and Facebook, to benefit from the online ad boom, the result of major companies following their audience to the web and away from newspapers and magazines.

Founded in Paris by Jean-Baptiste Rudelle in 2005, the start-up became a darling among online advertisers by boosting the rate at which Internet surfers click on display ads.

The company developed a technology known as “re-targeting,” which catches users who have visited a shopping website without buying anything and then shows them ads for similar items on other sites to tempt them back.
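
As a rough sketch of that logic (with an invented data model, not Criteo’s actual system), the core decision is simply to find products a shopper viewed but did not buy and pick a related item to show on the next site they visit:

```python
# A minimal, hypothetical re-targeting decision: advertise something
# similar to what the user browsed without purchasing.
def choose_retargeting_ad(browsing_history, purchases, catalog):
    """Return a product ID to advertise to this user, or None."""
    viewed_not_bought = [p for p in browsing_history if p not in purchases]
    if not viewed_not_bought:
        return None                      # nothing to re-target on
    last_viewed = viewed_not_bought[-1]
    # Prefer a similar item from the same category as the last product
    # the user looked at without buying.
    similar = [p for p in catalog
               if catalog[p]["category"] == catalog[last_viewed]["category"]
               and p != last_viewed]
    return similar[0] if similar else last_viewed

catalog = {
    "red_shoes":  {"category": "shoes"},
    "blue_shoes": {"category": "shoes"},
    "toaster":    {"category": "kitchen"},
}
print(choose_retargeting_ad(["red_shoes"], [], catalog))   # -> "blue_shoes"
```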

Criteo’s customers, including travel website Hotels.com, telecom operator Orange, and retailer Macy’s, only pay when a web surfer actually clicks on the ad.

In a rare move among French start-up founders, Rudelle moved to Silicon Valley to expand the company, which now operates in 37 countries.

“The U.S. is our number one market today, and a very strategic market for us,” said Rudelle, explaining the choice of listing in New York instead of Paris.

“Being listed on the Nasdaq says that we are here to stay and committed to our clients and partners.”

Criteo has roughly doubled its revenues every year since 2010 to reach 271.9 million euros in 2012. It made a profit of 800,000 euros last year but swung to a loss of 4.9 million euros in the first six months of 2013 because of increased investments.

There have been 26 U.S. technology listings this year, according to Thomson Reuters data, compared with 30 in 2012.

The sale could herald a pay-day for venture capital firms, which have ploughed some $64 million into Criteo.

Geneva-based Index Ventures was the largest shareholder with a 23.4 percent stake before the share sale. Others include Idinvest Partners with 22.6 percent, Elaia Partners with 13.5 percent and Bessemer Venture Partners with 9.5 percent.

All the funds will be selling relatively small portions of their stakes in the listing, according to the offer documents.

Rudelle will own 8.4-8.6 percent of the group.

JP Morgan, Deutsche Bank Securities and Jefferies are the lead underwriters for the issue.

The Clever Circuit That Doubles Bandwidth

A Stanford startup’s new radio can send and receive information on the same frequency—an advance that could double the speed of wireless networks.

By David Talbot

A startup spun out of Stanford says it has solved an age-old problem in radio communications with a new circuit and algorithm that allow data to be sent and received on the same radio frequency—thus doubling wireless capacity, at least in theory.

The company, Kumu Networks, has demonstrated the feat in a prototype and says it has agreed to run trials of the technology with unspecified major wireless carriers early next year.

The underlying technology, known as full-duplex radio, tackles a problem known as “self-interference.” As radios send and receive signals, the ones they send are billions of times stronger than the ones they receive. Any attempt to receive data on any given frequency is thwarted by the fact that the radio’s receiver is also picking up its own outgoing signal.

For this reason, most radios—including the ones in your smartphone, the base stations serving them, and Wi-Fi routers—send information out on one frequency and receive on another, or use the same frequency but rapidly toggle back and forth. Because of this inefficiency, radios use more wireless spectrum than is necessary.

To solve this, Kumu built an extremely fast circuit that can predict, moment by moment, how much interference a radio’s transmitter is about to create, and then generate a compensatory signal to cancel it out. The circuit generates a new signal with each packet of data sent, making it possible for the approach to work even in mobile devices, where the process of canceling signals is more complex because the objects they bounce off are constantly changing. “This was considered impossible to do for the past 100 years,” says Sachin Katti, assistant professor of electrical engineering and computer science at Stanford, and Kumu’s chief executive and cofounder.
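
The principle, if not Kumu’s circuit, can be sketched in software: because the radio knows exactly what it is transmitting, it can adaptively estimate how that signal leaks into its own receiver and subtract the estimate. The Python below uses a simple LMS-adapted filter and toy signals purely for illustration; the filter length, step size, and channel are assumptions, and Kumu’s cancellation runs in fast analog and digital circuitry rather than code like this.

```python
# A minimal sketch of digital self-interference cancellation: adaptively
# estimate how the known transmitted samples (tx) leak into the received
# samples (rx), then subtract that estimate to recover the weak signal.
import numpy as np

def cancel_self_interference(tx, rx, taps=16, mu=0.01):
    """Remove an adaptively estimated copy of tx from rx using an
    LMS-adapted FIR filter modelling the leakage channel."""
    w = np.zeros(taps)                    # running estimate of the leakage channel
    cleaned = np.zeros_like(rx)
    for n in range(len(rx)):
        # The most recent `taps` transmitted samples, newest first.
        x = tx[max(0, n - taps + 1): n + 1][::-1]
        x = np.pad(x, (0, taps - len(x)))
        predicted = np.dot(w, x)          # predicted self-interference
        residual = rx[n] - predicted      # what is left after cancellation
        w += mu * residual * x            # LMS update of the channel estimate
        cleaned[n] = residual
    return cleaned

# Toy test: rx is a delayed, attenuated copy of tx plus a much weaker
# incoming signal, which is what the receiver actually wants to hear.
rng = np.random.default_rng(0)
tx = rng.standard_normal(5000)
wanted = 0.001 * rng.standard_normal(5000)
rx = 0.8 * np.roll(tx, 3) + wanted
print("before:", np.std(rx), "after:", np.std(cancel_self_interference(tx, rx)))
```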

Other companies, including satellite modem maker Comtech, previously used self-cancellation to boost bandwidth on satellite communications. But the Stanford team is the first to demonstrate it in the radios used in networks such as LTE and Wi-Fi, which required cancelling signals that are five orders of magnitude stronger. (More details can be found in this paper.)

Jeff Reed, director of the wireless research center at Virginia Tech, says the new radio rig appears to be a major advance, but he’s awaiting real-world results. “If their claims are true, those are some very impressive numbers,” Reed says. “It requires very precise timing to pull this off.”

This full-duplex technology isn’t the only trick that can seemingly pull new wireless capacity out of thin air. New ways of encoding data stand the chance of making wireless networks as much as 10 times more efficient in some cases (see “A Bandwidth Breakthrough”). Various research efforts are honing new ultrafast sensing and switching tricks to change frequencies on the fly, thus making far better use of available spectrum (see “Frequency Hopping Radio Wastes Less Spectrum”). And emerging software tools allow rapid reconfiguration of wired and wireless networks, creating new efficiencies (see “TR10: Software-Defined Networking”). “A lot of the spectrum is massively underutilized, and this is one of the tools to throw in there to make better use of spectrum,” says Muriel Medard, a professor at MIT’s Research Laboratory of Electronics, and a leader in the field of network coding.

Kumu’s technology—even if it works perfectly—won’t provide a big benefit in all situations. In cases where most traffic is going in one direction—such as during a video download—full-duplex technology opens up capacity that you don’t actually need, like adding inbound lanes during evening outbound rush-hour traffic. Nonetheless, Katti sees benefits “on every wireless device in existence from cell phones and towers to Wi-Fi to Bluetooth and everything in between.” Kumu Networks has received $10 million from investors, including Khosla Ventures and New Enterprise Associates.

Startup Gets Computers to Read Faces, Seeks Purpose Beyond Ads

A technology for reading emotions on faces can help companies sell candy. Now its creators hope it also can take on bigger problems.

Last year more than 1,000 people in four countries sat down and watched 115 television ads, such as one featuring anthropomorphized M&M candies boogying in a bar. All the while, webcams pointed at their faces and streamed images of their expressions to a server in Waltham, Massachusetts.

In Waltham, an algorithm developed by a startup company called Affectiva performed what is known as facial coding: it tracked the panelists’ raised eyebrows, furrowed brows, smirks, half-smirks, frowns, and smiles. (Watch a video of the technology in action below this story or here.) When this face data was later merged with real-world sales data, it turned out that the facial measurements could be used to predict with 75 percent accuracy whether sales of the advertised products would increase, decrease, or stay the same after the commercials aired. By comparison, surveys of panelists’ feelings about the ads could predict the products’ sales with 70 percent accuracy.
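
A minimal sketch of that kind of analysis, using synthetic data and invented feature names rather than Affectiva’s measurements or model, would summarize each ad’s facial responses as a handful of per-ad features and fit a classifier against the sales outcome:

```python
# A toy version of the analysis described above, with synthetic data:
# each ad is summarized by a few facial-coding features and a model is
# trained to predict the sales outcome after the ad aired. Feature
# names and numbers are invented; this is not Affectiva's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_ads = 115

# Per-ad aggregates: mean smile, mean brow furrow, mean smirk, and the
# fraction of viewing time a face was detected (all made up here).
X = rng.random((n_ads, 4))
# Outcome after airing: 0 = sales fell, 1 = stayed flat, 2 = rose.
y = rng.integers(0, 3, size=n_ads)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
# With random features the score hovers near chance (about 0.33);
# real facial-coding features are what push it toward 75 percent.
print("cross-validated accuracy:", scores.mean())
```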

Although this was an incremental improvement statistically, it reflected a milestone in the field of affective computing. While people notoriously have a hard time articulating how they feel, now it is clear that machines can not only read some of their feelings but also go a step further and predict the statistical likelihood of later behavior.

Given that the market for TV ads in the United States alone exceeds $70 billion, insights from facial coding are “a big deal to business people,” says Rosalind Picard, who heads the affective computing group at MIT’s Media Lab and cofounded the company; she left the company earlier this year but is still an investor.

Even so, facial coding has not yet delivered on the broader, more altruistic visions of its creators. Helping to sell more chocolate is great, but when will facial coding help people with autism read social cues, boost teachers’ ability to see which students are struggling, or make computers empathetic?

Answers may start to come next month, when Affectiva launches a software development kit that will let its platform be used for approved apps. The hope, says Rana el Kaliouby, the company’s chief science officer and the other cofounder (see “Innovators Under 35: Rana el Kaliouby”), is to spread the technology beyond marketing. While she would not name the actual or potential partners, she said that “companies can use our technology for anything from gaming and entertainment to education and learning environments.”

Applications such as educational assistance—informing teachers when students are confused, or helping autistic kids read emotions on other people’s faces—figured strongly in the company’s conception. Affectiva, which launched four years ago and now has 35 employees and $20 million in venture funding, grew out of the Picard lab’s manifesto declaring that computers would do society a service if they could recognize and react to human emotions.

Over the years, the lab mocked up prototype technologies. These included a pressure-sensing mouse that could feel when your hand clenched in agitation; a robot called Kismet that could smile and raise its eyebrows; the “Galvactivator,” a skin conductivity sensor to measure heartbeat and sweating; and the facial coding system, developed and refined by el Kaliouby.

Affectiva bet on two initial products: a wrist-worn gadget called the Q sensor that could measure skin conductance, temperature, and activity levels (which can be indicators of stress, anxiety, sleep problems, seizures, and some other medical conditions); and Affdex, the facial coding software. But while the Q sensor seemed to show early promise (see “Wrist Sensor Tells You How Stressed Out You Are” and “Sensor Detects Emotions through the Skin”), in April the company discontinued the product, seeing little potential market beyond researchers working on applications such as measuring physiological signs that presage seizures. That leaves the company with Affdex, which is mainly being used by market research companies, including Insight Express and Millward Brown, and consumer product companies like Unilever and Mars.

Now, as the company preps its development kit, the market research work may provide an indirect payoff. After spending three years convening webcam-based panels around the world, Affectiva has amassed a database of more than one billion facial reactions. The accuracy of the system could pave the way for applications that read the emotions on people’s faces using ordinary home computers and portable devices. “Affectiva is tackling a hugely difficult problem, facial expression analysis in difficult and unconstrained environments, that a large portion of the academic community has been avoiding,” says Tadas Baltrusaitis, a doctoral student at the University of Cambridge, who has written several papers on facial coding.

What’s more, by using panelists from 52 countries, Affectiva has been teasing out lessons specific to gender, culture, and topic. Facial coding has particular value when people are unwilling to self-report their feelings. For example, el Kaliouby says, when Indian women were shown an ad for skin lotion, every one of them smiled when a husband touched his wife’s midriff—but none of the women would later acknowledge or mention that scene, much less admit to having enjoyed it.

Education may be ripe for the technology. A host of studies have shown the potential; one by researchers at the University of California, San Diego—who have founded a competing startup called Emotient—showed that facial expressions predicted the perceived difficulty of a video lecture and the student’s preferred viewing speed. Another showed that facial coding could measure student engagement during an iPad-based tutoring session, and that these measures of engagement, in turn, predicted how the students would later perform on tests.

Such technologies may be particularly helpful to students with learning disabilities, says Winslow Burleson, an assistant professor at Arizona State University, author of a paper describing these potential uses of facial coding and other technologies. Similarly, the technology could help clinicians tell whether a patient understands instructions. Or it could improve computer games by detecting player emotions and using that feedback to change the game or enhance a virtual character.

Taken together, the insights from many such studies suggest a role for Affdex in online classrooms, says Picard. “In a real classroom you have a sense of whether the students are actively attentive,” she says. “As you go to online learning, you don’t even know if they are there. Now you can measure not just whether they are present and attentive, but if you are speaking—if you crack a joke, do they smile or smirk?”

Nonetheless, Baltrusaitis says many questions remain about which emotional states in students are relevant, and what should be done when those states are detected. “I think the field will need to develop a bit further before we see this being rolled out in classrooms or online courses,” he says.

The coming year should reveal a great deal about whether facial coding can have benefits beyond TV commercials. Affdex faces competition from other apps and startups, and even some marketers remain skeptical that facial coding is better than traditional methods of testing ads. Not all reactions are expressed on the face, and many other measurement tools claim to read people’s emotions, says Ilya Vedrashko, who heads a consumer intelligence research group at Hill Holliday, an ad agency in Boston.

Yet with every new face, the technology gets stronger. That’s why el Kaliouby believes it is poised to take on bigger problems. “We want to make facial coding technology ubiquitous,” she says.

AI Startup Says It Has Defeated Captchas

Brain-mimicking software can reliably solve a test meant to separate humans from machines.

Captchas, those hard-to-read jumbles of letters and numbers that many websites use to foil spammers and automated bots, aren’t necessarily impossible for computers to handle. An artificial-intelligence company called Vicarious says its technology can solve numerous types of Captchas more than 90 percent of the time.

It’s not the first time that computer scientists have managed to fool this method of separating man from machine. But Vicarious says its technique is more reliable and more useful than others because it doesn’t require mountains of training data to recognize letters and numbers consistently. Nor does it take a lot of computing power. Vicarious does it with a visual perception system that can mimic the brain’s ability to process visual information and recognize objects.

The purposes go well beyond Captchas: Vicarious hopes to eventually sell systems that can easily extract text and numbers from images (such as in Google’s Street View maps), diagnose diseases by checking out medical images, or let you know how many calories you’re about to eat by looking at your lunch. “Anything people do with their eyes right now is something we aim to be able to automate,” says cofounder D. Scott Phoenix.

Vicarious expands on an old idea of using an artificial neural network that is modeled on the brain and builds connections between artificial neurons (see “10 Breakthrough Technologies: Deep Learning”). One big difference in Vicarious’s approach, says cofounder Dileep George, is that its system can be trained with moving images rather than only static ones.

Vicarious set its cognition algorithms to work on solving Captchas as a way of testing its approach. After training its system to recognize numbers and letters, it could solve Captchas from PayPal, Yahoo, Google, and other online services. The company says its average accuracy rate ranges from 90 to 99 percent, depending on the type of Captcha (for example, some feature characters arranged within a grid of rectangles, while others might have characters in front of a wavy background). The system performed best with Captchas composed of letters that look like they’re made out of fingerprints.
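
For a sense of the underlying task, here is a minimal sketch of the conventional segment-then-classify route, using scikit-learn’s small digits dataset as a stand-in for captcha glyphs. Note that this is exactly the data-hungry supervised approach Vicarious says it avoids, so it illustrates the problem rather than the company’s method.

```python
# Train a small classifier on labeled glyph images, then "read" a
# captcha by classifying each already-segmented glyph in sequence.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale digit images stand in for segmented captcha characters.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# Treat a "captcha" as a sequence of segmented glyphs and read it out.
captcha_glyphs = X_test[:6]
print("predicted string:", "".join(str(d) for d in clf.predict(captcha_glyphs)))
print("held-out accuracy:", clf.score(X_test, y_test))
```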

“Captcha” stands for “completely automated public Turing test to tell computers and humans apart.” They were created in 2000 by researchers at Carnegie Mellon University and are solved by millions of Web users daily.

That’s not about to change: Vicarious isn’t going to release its system publicly. And besides, as Luis von Ahn, one of the creators of the Captcha, points out, many people have shown evidence of computerized Captcha-solving over the years. Von Ahn even helpfully passed along a link to a list of such instances.

With Firefox OS, an $80 Smartphone Tries to Prove Its Worth

Despite limitations, the Firefox OS-running ZTE Open shows promise for low-cost smartphones.

While the word “smartphone” usually evokes images of pricey iPhones and Android handsets, plenty of inexpensive smartphones are also hitting the market—ripe for the millions of cell phone owners who want a smartphone, but can’t (or don’t want to) pay hundreds of dollars for one.

For Mozilla, which makes the popular Firefox Web browser, this looks like the most promising target market for its recently released Firefox OS, an open-source, largely Web-based mobile operating system intended to run on lower-cost smartphones. The first phones running the OS began selling this summer in several markets around the world.

The company is taking on an audacious challenge, going up against established operating systems like Google’s Android, as well as a slew of less well-known mobile operating systems. And if it wants to succeed, Mozilla has to ensure that those making Firefox OS-running phones—which include ZTE and LG—build products that consumers actually want to use, regardless of how much less they cost than many others on the market.

Curious to see how Mozilla’s efforts are playing out, I decided to check out one of these phones just after the release of a significant update to the Firefox OS this month: the ZTE Open ($80, unlocked, and available in the U.S. on eBay), which the Chinese smartphone maker undoubtedly sees as a way to grow sales by offering an inexpensive handset that uses an alternative OS. I tried to test it while keeping in mind how I might feel if this were not only my first smartphone, but also my first computer, which will undoubtedly be the case for some buyers.

My initial verdict? The Firefox OS is off to a good start, and for $80, the ZTE Open is an okay handset. With many improvements over time—some of which will presumably come from the developer community, which Mozilla hopes will build a slew of Web-based apps for the platform—the OS and smartphones like the ZTE Open could be an excellent choice for those who want basic smartphone capabilities but are not going to pay for a high-end handset.

The handset’s price tag is considerably lower than that of some similar devices. Buying the ZTE Open through Telefonica’s Movistar in Spain, for example, cost 49 euros (about $68) when I last checked; you’d have to pay more than twice that—116 euros, or about $160—for the next cheapest available prepaid smartphone, a Sony Xperia E that runs Android and has similar specifications. Through Movistar in Colombia, the device costs about $80 (U.S.), while a Samsung Galaxy Young Android smartphone costs about $158.

That low price shows in various ways. The first thing you may notice is that the ZTE Open could use some help in the fashion department. It looks a lot more like a smartphone from a couple of years ago than the hottest new handset. It’s squat and chunky, with a soft-feeling plastic back and display frame in pearly Firefox orange (that said, it feels good and solid in your hand, and I wasn’t afraid it would break if I dropped it). Its face is dominated by a touch screen that measures 3.5 inches at the diagonal, with a capacitive “home” button centered below it.

The Firefox OS is extremely intuitive and easy to find your way around, clearly taking many cues from iOS and Android. When you unlock the phone, you see a row of rounded app icons at the bottom of the screen for easy access to top functions (such as making calls, sending messages, and opening the Firefox Web browser—you can change these to suit your habits). There’s a swipe-down notification screen that also gives easy access to wireless and other device settings, and a Marketplace app that allows you to download Web apps (apps built using Web technologies like HTML5) from some big names including Facebook, Twitter, and YouTube.

Perhaps the most interesting thing about the Firefox OS is the way it tries to blur the divide between native and Web software. Atop the phone’s main home screen is a handy search bar; whatever you type in there will bring up results both on and off the phone. Search for “dinner,” for example, and you’ll get a list of round icons corresponding to dinner-related apps already installed on your device, as well as recipe- and dining-related websites shown as though they, too, were apps. Click on a result, such as Yelp, and it will automatically search for restaurants serving dinner near you. You can add this specific Yelp search to your home screen for future easy access. This is a clever way around the problem of a lack of apps (there is no native Yelp app for Firefox OS), but you’re really just adding a link to the Yelp mobile site to your phone in the form of a rounded icon. The fact that apps are built using Web technologies—such as HTML, CSS, and JavaScript—may also entice Web developers to build their first mobile apps.

There are a number of apps included on the handset, such as Nokia’s Here Maps, Facebook, and AccuWeather. And while some, like Here Maps, felt like self-contained app experiences, others (like the New York Times app) looked more like mobile websites. There also didn’t seem to be any notifications for any of the apps I had on the phone (there was an option in the phone’s settings to show notifications on the phone’s lock screen, which I enabled, so presumably Mozilla is still working on that).

The handset hardware is decidedly low-end, but not entirely no-frills, with a three-megapixel rear camera, rear speaker, and Bluetooth, as well as the ability to function as a Wi-Fi hotspot. There’s hardly any storage space on the phone itself; if you want to store photos, videos, and music, you’ll need to pop in a microSD memory card.

One of the phone’s biggest issues seems to be its speed (or lack thereof), which is limited in part by its processor and memory (a one-gigahertz Qualcomm CPU and 256 megabytes of RAM—the same as the Galaxy Young but a measly amount compared to the latest iPhones and high-end Android handsets) and wireless network capabilities (2G and 3G, not LTE). In the U.S., you’ll need to use it with either T-Mobile’s or AT&T’s network; I tested it on T-Mobile’s network and found it somewhat pokey, especially when loading media-heavy Web pages, but generally okay. This is expected, given the low price, but I’m hopeful that improvements to the OS can help in the near term. In fact, I already noticed a bit of a speed difference between using a phone running the most recent Firefox OS and the last version, which is a good sign.

I also had problems with its touch capabilities, which often seemed unwilling to do what I wanted. Tapping app icons and virtual buttons often took several tries, as did tapping a field to enter text (such as a username and password or a URL). Numerous times I swiped right or left to move between the phone’s virtual home screens without seeing a change, or thought I was tapping one button and somehow hit another, which was annoying. It’s irritating, but hopefully the abundance of touch screens in mobile devices and improvements in the technology will soon make it affordable to add better touch screens to low-end phones, too.

The display itself isn’t great either, with 480 x 320 pixel resolution, which gives videos and still images a washed-out, less-than-sharp appearance. But it’s good enough for watching some YouTube clips and basic Web surfing, social networking, and messaging, as well as doing some simple photo editing (you get a few built-in options like filters, though nothing fancy).

Phone calls sounded decent, but somewhat fuzzy, and I am a bit concerned about the battery life, which I was able to run down to 50 percent in about three and a half hours of heavy usage.

Since phones running the Firefox OS are heavily Web-dependent—the search feature, for instance, customizes its results and backgrounds with the aid of the Internet—I’m also worried about how they will function in the absence of reliable networks. Even with a strong Wi-Fi network and access to a fairly dependable T-Mobile 3G network, the phone was prone to stuttering on the Web and having trouble loading pages or conducting searches in the included Nokia Here Maps app. This could be a big problem in less-developed areas, where wireless networks and Wi-Fi hotspots are less abundant and functional, and could make users extremely frustrated.

Presumably, as with the other shortcomings, Mozilla has this in mind as it moves forward with its OS development. There’s still a lot of work to be done, but I’m excited to see the results.

New Gene Therapy Company Launches

Spark Therapeutics hopes to commercialize multiple gene-based treatments developed at the Children’s Hospital of Philadelphia.

A new biotechnology company will take over human trials of two gene therapies that could offer one-time treatments for a form of childhood blindness and hemophilia B.

The gene therapies were developed by researchers at the Children’s Hospital of Philadelphia, which has committed $50 million to the new company, called Spark Therapeutics. The launch is the latest hint that after decades of research and some early setbacks, gene therapy may be on its way to realizing its potential as a powerful treatment for inherited disease.

In December 2012, the European Union gave permission to Dutch company Uniqure to sell its gene therapy for a fat-processing disorder, making Glybera the first gene therapy to make its way into a Western market (see “Gene Therapy on the Mend as Treatment Gets Western Approval”). However, Glybera has not been approved by the U.S., nor has any other gene therapy.

Spark has a chance to be the first gene-therapy company to see FDA approval. Results for a late-stage trial of a gene therapy for Leber’s Congenital Amaurosis, an inherited condition that leads to a loss of vision and eventually blindness, are expected by mid-2015. That treatment is one of several gene therapies in or nearing late-stage testing contending to be the first gene therapy approved by the FDA for sale in the U.S. (see “When Will Gene Therapy Come to the U.S.”).

In addition to taking the reins for two ongoing human trials, Spark will also work on gene therapies for other eye and blood conditions as well as neurodegenerative diseases, says CEO Jeff Marrazzo. The gene therapy technology developed at the Children’s Hospital has been “speeding down the tracks,” he says, and the company will provide the “vehicle to get these therapies to the people who need them.”

Flame-Shaping Electric Fields Could Make Power Plants Cleaner

ClearSign’s pollution-reducing technology could help power plants burn less fuel and make more money.

By Kevin Bullis on October 23, 2013

A Seattle company called ClearSign Combustion has developed a trick that it says could nearly eliminate key pollutants from power plants and refineries, and make such installations much more efficient. The technique uses electric fields to control the combustion of fuel by manipulating the shape and brightness of flames.

The technology could offer a cheaper way to reduce pollution in poor countries. And because ClearSign’s approach to reducing pollution also reduces the amount of fuel a power plant consumes, it can pay for itself, the company says. The need for better pollution controls is clear now in China, where hazardous pollution has been shutting down schools and roads this week.

The company claims that its technology could reduce fuel consumption by as much as 30 percent. Some outside experts say that in practice the likely improvement would be far less, possibly only a few percent, although even that would still result in large savings.

Much of the pollution from a power plant is the result of problems with combustion. If parts of a flame get too hot, it can lead to the formation of nitrogen oxides, which contribute to smog. Similarly, incomplete burning, which can result from the poor mixing of fuel and air, can form soot (see “Cheaper, Cleaner Combustion”).

ClearSign uses high-voltage electric fields to manipulate the electrically charged molecules in a combustion flame. This can improve the way air and fuel mix together, and can spread out a flame to prevent hot spots that cause pollution.

The idea of using electricity to shape flames has been around for decades. But conventional approaches typically involve plasma, and the plasma needs large amounts of energy. ClearSign says its technology only uses one-tenth of 1 percent of the energy in the fuel that a power plant consumes. It works using electrodes within the flame: the electrodes apply high voltages that influence the movement of ions, and by varying the voltage, it’s possible to control the way the flame forms. The technology is particularly effective at reducing smog-forming NOx emissions, carbon monoxide, and soot.
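
The arithmetic behind the claim that the system can pay for itself is simple. Taking the two percentages quoted in this story and an assumed, purely illustrative fuel bill for a single plant:

```python
# Back-of-envelope comparison of the electrodes' energy draw with the
# potential fuel savings. The annual fuel figure is an assumption for
# illustration; only the percentages come from the article.
annual_fuel_energy_mwh = 1_000_000          # assumed fuel input for one plant
electrode_share = 0.001                     # "one-tenth of 1 percent" of fuel energy
savings_estimates = {"experts' estimate (~a few percent)": 0.03,
                     "company claim (up to 30 percent)": 0.30}

electrode_cost = electrode_share * annual_fuel_energy_mwh
for label, share in savings_estimates.items():
    saved = share * annual_fuel_energy_mwh
    print(f"{label}: saves {saved:,.0f} MWh vs {electrode_cost:,.0f} MWh "
          f"drawn by the electrodes ({saved / electrode_cost:.0f}x)")
```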

“There’s been interest in electric fields for some time, but nothing with as strong an effect as they’ve demonstrated,” says Michael Frenklach, a professor of mechanical engineering at the University of California, Berkeley.

In addition to reducing pollution, the technology can improve the efficiency of a power plant or a refinery in several ways. Improved mixing of fuel and air means less fuel is wasted by incomplete combustion; the technology can also improve heat transfer from the flame to the water in a boiler, so less fuel is needed to make steam, which is used to drive turbines in a power plant. But the biggest potential for fuel savings could be in reducing or eliminating the need for conventional pollution controls, which can consume significant amounts of energy, and can be expensive.