With Firefox OS, an $80 Smartphone Tries to Prove Its Worth

Despite limitations, the Firefox OS-running ZTE Open shows promise for low-cost smartphones.

While the word “smartphone” usually evokes images of pricey iPhones and Android handsets, plenty of inexpensive smartphones are also hitting the market—aimed at the millions of cell phone owners who want a smartphone but can’t (or don’t want to) pay hundreds of dollars for one.

For Mozilla, which makes the popular Firefox Web browser, this looks like the most promising target market for its recently released Firefox OS, an open-source, largely Web-based mobile operating system intended to run on lower-cost smartphones. The first phones running the OS began selling this summer in several markets around the world.

The company is taking on an audacious challenge, going up against established operating systems like Google’s Android, as well as a slew of less well-known mobile operating systems. And if it wants to succeed, Mozilla has to ensure that the companies making Firefox OS phones—which include ZTE and LG—build products that consumers actually want to use, regardless of how much less they cost than many others on the market.

Curious to see how Mozilla’s efforts are playing out, I decided to check out one of these phones just after the release of a significant update to the Firefox OS this month: the ZTE Open ($80, unlocked, and available in the U.S. on eBay), which the Chinese smartphone maker undoubtedly sees as a way to grow sales by offering an inexpensive handset that uses an alternative OS. I tried to test it while keeping in mind how I might feel if this were not only my first smartphone, but also my first computer, which will undoubtedly be the case for some buyers.

My initial verdict? The Firefox OS is off to a good start, and for $80, the ZTE Open is an okay handset. With many improvements over time—some of which will presumably come from the developer community, which Mozilla hopes will build a slew of Web-based apps for the platform—the OS and smartphones like the ZTE Open could be an excellent choice for those who want basic smartphone capabilities but are not going to pay for a high-end handset.

The handset’s price tag is considerably lower than those of similar devices. Buying the ZTE Open through Telefonica’s Movistar in Spain, for example, cost 49 euros (about $68) when I last checked; you’d have to pay more than twice that—116 euros, or about $160—for the next-cheapest available prepaid smartphone, a Sony Xperia E that runs Android and has similar specifications. Through Movistar in Colombia, the device costs about $80 (U.S.), while a Samsung Galaxy Young Android smartphone costs about $158.

That low price shows in various ways. The first thing you may notice is that the ZTE Open could use some help in the fashion department. It looks a lot more like a smartphone from a couple of years ago than the hottest new handset. It’s squat and chunky, with a soft-feeling plastic back and display frame in pearly Firefox orange (that said, it feels good and solid in your hand, and I wasn’t afraid it would break if I dropped it). Its face is dominated by a touch screen that measures 3.5 inches at the diagonal, with a capacitive “home” button centered below it.

The Firefox OS is extremely intuitive and easy to find your way around, clearly taking many cues from iOS and Android. When you unlock the phone, you see a row of rounded app icons at the bottom of the screen for easy access to top functions (such as making calls, sending messages, and opening the Firefox Web browser—you can change these to suit your habits). There’s a swipe-down notification screen that also gives easy access to wireless and other device settings, and a Marketplace app that allows you to download Web apps (apps built using Web technologies like HTML5) from some big names including Facebook, Twitter, and YouTube.

Perhaps the most interesting thing about the Firefox OS is the way it tries to blur the divide between native and Web software. Atop the phone’s main home screen is a handy search bar; whatever you type there will bring up results both on and off the phone. Search for “dinner,” for example, and you’ll get a list of round icons corresponding to dinner-related apps already installed on your device, as well as recipe- and dining-related websites shown as though they, too, were apps. Click on a result, such as Yelp, and it will automatically search for restaurants serving dinner near you. You can add this specific Yelp search to your home screen for easy access in the future. This is a clever way around the problem of a lack of apps (there is no native Yelp app for Firefox OS), but you’re really just adding a link to the Yelp mobile site to your phone in the form of a rounded icon. The fact that apps are built using Web technologies—such as HTML, CSS, and JavaScript—may also entice Web developers to build their first mobile apps.
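
To make that concrete, here is a rough sketch of the kind of manifest a Firefox OS Web app ships with, generated with a few lines of Python purely for illustration. The field names follow Mozilla’s manifest.webapp format as published; all of the values (app name, paths, developer details) are hypothetical.

    # A minimal sketch of a Firefox OS (Open Web App) manifest, generated
    # here with Python for illustration. Field names follow Mozilla's
    # manifest.webapp format; all values are hypothetical.
    import json

    manifest = {
        "name": "Dinner Finder",                # hypothetical app name
        "description": "Find dinner spots nearby",
        "launch_path": "/index.html",           # page loaded when the app opens
        "icons": {"128": "/img/icon-128.png"},  # the rounded home-screen icon
        "developer": {"name": "Example Dev", "url": "http://example.com"},
    }

    with open("manifest.webapp", "w") as f:
        json.dump(manifest, f, indent=2)

Everything else in such an app is ordinary HTML, CSS, and JavaScript, which is why an existing mobile website can double as an “app” on the platform.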

There are a number of apps included on the handset, such as Nokia’s Here Maps, Facebook, and AccuWeather. And while some, like Here Maps, felt like self-contained app experiences, others (like the New York Times app) looked more like mobile websites. None of the apps on the phone seemed to deliver notifications, either, even after I enabled the lock-screen notifications option in the phone’s settings, so presumably Mozilla is still working on that.

The handset’s hardware is decidedly low-end, but not entirely no-frills, with a three-megapixel rear camera, rear speaker, and Bluetooth, as well as the ability to function as a Wi-Fi hotspot. There’s hardly any storage space on the phone itself; if you want to take photos and store videos and music, you’ll need to pop in a microSD memory card.

One of the phone’s biggest issues seems to be its speed (or lack thereof), which is limited in part by its processor and memory (a one-gigahertz Qualcomm CPU and 256 megabytes of RAM—the same as the Galaxy Young but a measly amount compared to the latest iPhones and high-end Android handsets) and wireless network capabilities (2G and 3G, not LTE). In the U.S., you’ll need to use it with either T-Mobile’s or AT&T’s network; I tested it on T-Mobile’s network and found it somewhat pokey, especially when loading media-heavy Web pages, but generally okay. This is expected, given the low price, but I’m hopeful that improvements to the OS can help in the near term. In fact, I already noticed a bit of a speed difference between the most recent Firefox OS release and the previous version, which is a good sign.

I also had problems with the touch screen, which often seemed unwilling to do what I wanted. Tapping app icons and virtual buttons often took several tries, as did tapping a field to enter text (such as a username and password or a URL). Numerous times I swiped right or left to move between the phone’s virtual home screens without seeing a change, or thought I was tapping one button and somehow hit another. It’s irritating, but hopefully the abundance of touch screens in mobile devices and improvements in the technology will soon make it affordable to add better touch screens to low-end phones, too.

The display itself isn’t great either, with 480 x 320 pixel resolution, which gives videos and still images a washed-out, less-than-sharp appearance. But it’s good enough for watching some YouTube clips and basic Web surfing, social networking, and messaging, as well as doing some simple photo editing (you get a few built-in options like filters, though nothing fancy).
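
For a rough sense of what that resolution means on a 3.5-inch panel, here is the back-of-envelope pixel-density arithmetic (an illustrative calculation, not a manufacturer spec):

    # Pixel density of a 480 x 320 display measuring 3.5 inches diagonally.
    import math

    width_px, height_px, diagonal_in = 480, 320, 3.5
    ppi = math.hypot(width_px, height_px) / diagonal_in
    print(f"{ppi:.0f} pixels per inch")  # ~165 ppi, versus 326 ppi for
                                         # the Retina-display iPhones of the era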

Phone calls sounded decent, but somewhat fuzzy, and I am a bit concerned about the battery life, which I was able to run down to 50 percent in about three and a half hours of heavy usage.

Since phones running the Firefox OS are heavily Web-dependent—the search feature, for instance, customizes its results and backgrounds with the aid of the Internet—I’m also worried about how they will function in the absence of reliable networks. Even with a strong Wi-Fi network and access to a fairly dependable T-Mobile 3G network, the phone was prone to stuttering on the Web and having trouble loading pages or conducting searches in the included Nokia Here Maps app. This could be a big problem in less-developed areas, where wireless networks and Wi-Fi hotspots are less abundant and functional, and could make users extremely frustrated.

Presumably, as with the other shortcomings, Mozilla has this in mind as it moves forward with its OS development. There’s still a lot of work to be done, but I’m excited to see the results.

New Gene Therapy Company Launches

Spark Therapeutics hopes to commercialize multiple gene-based treatments developed at the Children’s Hospital of Philadelphia.

A new biotechnology company will take over human trials of two gene therapies that could offer one-time treatments for a form of childhood blindness and hemophilia B.

The gene therapies were developed by researchers at the Children’s Hospital of Philadelphia, which has committed $50 million to the new company, called Spark Therapeutics. The launch is the latest hint that after decades of research and some early setbacks, gene therapy may be on its way to realizing its potential as a powerful treatment for inherited disease.

In December 2012, the European Union gave the Dutch company Uniqure permission to sell Glybera, its gene therapy for a fat-processing disorder, making it the first gene therapy to reach a Western market (see “Gene Therapy on the Mend as Treatment Gets Western Approval”). However, Glybera has not been approved in the U.S., nor has any other gene therapy.

Spark has a chance to be the first gene-therapy company to see FDA approval. Results for a late-stage trial of a gene therapy for Leber’s Congenital Amaurosis, an inherited condition that leads to a loss of vision and eventually blindness, are expected by mid-2015. That treatment is one of several gene therapies in or nearing late-stage testing contending to be the first gene therapy approved by the FDA for sale in the U.S. (see “When Will Gene Therapy Come to the U.S.”).

In addition to taking the reins for two ongoing human trials, Spark will also work on gene therapies for other eye and blood conditions as well as neurodegenerative diseases, says CEO Jeff Marrazzo. The gene therapy technology developed at the Children’s Hospital has been “speeding down the tracks,” he says, and the company will provide the “vehicle to get these therapies to the people who need them.”

Flame-Shaping Electric Fields Could Make Power Plants Cleaner

ClearSign’s pollution-reducing technology could help power plants burn less fuel and make more money.

By Kevin Bullis on October 23, 2013

A Seattle company called ClearSign Combustion has developed a trick that it says could nearly eliminate key pollutants from power plants and refineries, and make such installations much more efficient. The technique uses electric fields to control the combustion of fuel, manipulating the shape and brightness of flames.

The technology could offer a cheaper way to reduce pollution in poor countries. And because ClearSign’s approach to reducing pollution also reduces the amount of fuel a power plant consumes, it can pay for itself, the company says. The need for better pollution controls is clear now in China, where hazardous pollution has been shutting down schools and roads this week.

The company claims that its technology could reduce fuel consumption by as much as 30 percent. Some outside experts say that in practice the likely improvement would be far less, possibly only a few percent, although even that would still result in large savings.

Much of the pollution from a power plant is the result of problems with combustion. If parts of a flame get too hot, it can lead to the formation of nitrogen oxides, which contribute to smog. Similarly, incomplete burning, which can result from the poor mixing of fuel and air, can form soot (see “Cheaper, Cleaner Combustion”).

ClearSign uses high-voltage electric fields to manipulate the electrically charged molecules in a combustion flame. This can improve the way air and fuel mix together, and can spread out a flame to prevent hot spots that cause pollution.

The idea of using electricity to shape flames has been around for decades. But conventional approaches typically involve plasma, and the plasma needs large amounts of energy. ClearSign says its technology only uses one-tenth of 1 percent of the energy in the fuel that a power plant consumes. It works using electrodes within the flame. The electrode produces high voltages that influence the movement of ions; by varying the voltage, it’s possible to control the way the flame forms. The technology is particularly effective at reducing smog-forming NOx emissions, carbon monoxide, and soot.

“There’s been interest in electric fields for some time, but nothing with as strong an effect as they’ve demonstrated,” says Michael Frenklach, a professor of mechanical engineering at the University of California, Berkeley.

In addition to reducing pollution, the technology can improve the efficiency of a power plant or a refinery in several ways. Improved mixing of fuel and air means less fuel is wasted by incomplete combustion; the technology can also improve heat transfer from the flame to the water in a boiler, so less fuel is needed to make steam, which is used to drive turbines in a power plant. But the biggest potential for fuel savings could be in reducing or eliminating the need for conventional pollution controls, which can consume significant amounts of energy, and can be expensive.


A Successful Moon Shot for Laser Communications

A test of high-bandwidth optical communications from lunar orbiter to Earth stations succeeds.

There was no “Mr. Watson—come here—I want to see you” moment. But a pioneering space-based optical communications test has been a big success. And that means optical systems stand a higher chance not only of dominating future space data transmissions (with radio systems serving as a backup) but also of enabling new satellite networks that would boost the capacity of the terrestrial Internet.

A test of the Lunar Laser Communications Demonstration (see “NASA Moonshot Will Test Laser Communications”) aboard a probe in lunar orbit is working just as planned, delivering download speeds six times faster than those of the fastest radio system used for moon communications. Don Boroson, the researcher at MIT’s Lincoln Lab who led the project, says, “We have successfully hit all our marks—all the downlink rates up to 622 Mbps [and] our two uplink rates up to 20 Mbps.”

One of the toughest parts of the task: aligning ground telescopes to continually see the incoming infrared laser beam dispatched from a probe whizzing around the moon. This “signal acquisition” was “fast and reliable,” he added. His team even transmitted high-definition video of “shuttle launches, space station antics, and Earth images,” he said. “Also, some little videos we took of ourselves in the operations center.”

Ground-based detectors were set up in California, New Mexico, and one of the Canary Islands. The big difficulty with sending optical signals through the air is that they can be blocked by clouds. Still, in the future, networks of satellites could transmit data among each other and then to ground stations in various places, giving a bandwidth boost to the ground-based fiber network.


A Lifeline for a Cellulosic-Biofuel Company

$100 million in new funding will keep the woodchip-to-gasoline company Kior afloat, for now.

Yesterday Kior, a company that turns wood chips into gasoline and diesel fuel, announced that it had raised $100 million, which should be enough to keep it in business for another year or so and help it build a new biorefinery. The funding is a lifeline for a business that just a couple of months ago looked close to failure. But the company, which operates the largest U.S. refinery for converting cellulosic biomass into fuel (see “Kior ‘Biocrude’ Plant a Step Toward Advanced Biofuels”), is still a long way from being profitable.

Cellulosic biofuels could, at least in theory, reduce oil imports and greenhouse-gas emissions, and the U.S. Congress has required fuel companies to buy billions of gallons of them. But in spite of this mandate, very little is produced. Although dozens of companies have trotted out lab-scale technologies for breaking down recalcitrant biomass and turning it into fuel, they’ve struggled to commercialize these systems, in part because it’s been difficult to raise funds to build large refineries and in part because the methods often fail to perform as well at a large scale as they do in the lab. (For example, one company, Range Fuels, found that its system became clogged up with tar.) As a result, the government mandate has repeatedly been waived (see “The Death of Range Fuels Shouldn’t Doom All Biofuels” and “The Cellulosic Industry Faces Big Challenges”).


Kior itself has run into technical difficulties that have kept it from running its huge biofuel plant at full scale. The plant is designed to produce 13 million gallons of fuel per year and started producing its first fuel—diesel—in March 2013. The company said it would ship a total of 300,000 to 500,000 gallons by midyear, but it only managed to ship 75,000 gallons. The shortfall in production resulted in lower-than-expected revenue and a loss of $38.5 million in the second quarter, up from $23 million for the same quarter a year before. With little revenue and high costs, some analysts started to worry that the company would run out of money.

The $100 million investment buys the company time, and by some measures it’s making good progress, says Mike Ritzenthaler, a senior research analyst at Piper Jaffray. For example, he notes that production levels are increasing, and the company looks on track to produce a million gallons of fuel by the end of the year. Kior also has the advantage of making gasoline rather than ethanol, the market for which is saturated in the United States.

But big challenges remain. If Kior hopes to break even and eventually turn a profit, it needs the economies of scale that come from even bigger refineries, and building those will require more funding. Funding for cellulosic plants has been particularly hard to come by, since investors are reluctant to take a risk on the new technology.

Anonymity Network Tor Needs a Tune-up to Protect Users from Surveillance

Fixes are planned for Internet anonymity tool Tor after researchers showed that national intelligence agencies could plausibly unmask users.

By Tom Simonite on October 25, 2013

The Tor Project is trying to develop critical adjustments to how its tool works to strengthen it against potential compromise. Researchers at the U.S. Naval Research Laboratory have discovered that Tor’s design is more vulnerable than previously realized to a kind of attack the NSA or government agencies in other countries might mount to deanonymize people using Tor.

Tor prevents people using the Internet from leaving many of the usual traces that can allow a government or ISP to know which websites or other services they are connecting to. Users of the tool range from people trying to evade corporate firewalls to activists, dissidents, criminals, and U.S. government workers with more sophisticated adversaries to avoid.

When people install the Tor client software, their outgoing and incoming traffic takes an indirect route around the Internet, hopping through a network of “relay” computers run by volunteers around the world. Packets of data hopping through that network are encrypted so that relays know only their previous and next destination (see “Dissent Made Safer”). This means that even if a relay is compromised, the identity of users, and details of their browsing, should not be revealed.
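
As a toy sketch of that layering idea (this is not Tor’s actual protocol or cipher suite, and it assumes the third-party Python cryptography package), the client wraps a message in one layer of encryption per relay, and each relay can strip only its own layer:

    # Toy onion routing: each relay peels one encryption layer and learns
    # only the next hop, never the whole path. Not Tor's real protocol.
    import json
    from cryptography.fernet import Fernet

    path = ["relay1", "relay2", "relay3"]            # hypothetical 3-hop circuit
    keys = {r: Fernet.generate_key() for r in path}  # one symmetric key per relay

    def build_onion(message: str, destination: str) -> str:
        """Wrap the message in one layer per relay, innermost layer first."""
        payload = message
        hops = list(zip(path, path[1:] + [destination]))
        for relay, next_hop in reversed(hops):
            layer = json.dumps({"next": next_hop, "payload": payload})
            payload = Fernet(keys[relay]).encrypt(layer.encode()).decode()
        return payload

    def peel(relay: str, blob: str):
        """One relay removes its own layer: it sees the next hop, nothing more."""
        layer = json.loads(Fernet(keys[relay]).decrypt(blob.encode()))
        return layer["next"], layer["payload"]

    blob = build_onion("GET /", "example.com")
    for relay in path:
        next_hop, blob = peel(relay, blob)
        print(relay, "forwards to", next_hop)  # each relay learns only one link
    print("exit delivers:", blob)              # "GET /"

Because no single relay ever holds both the user’s address and the final destination, an attacker needs a broader vantage point—which is exactly what the new research examines.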

However, new research shows how a government agency could work out the true source and destination of Tor traffic with relative ease. Aaron Johnson of the U.S. Naval Research Laboratory and colleagues found that the network is vulnerable to a type of attack known as traffic analysis.

This type of attack involves observing Internet traffic data going into and out of the Tor network and looking for patterns that reveal the Internet services that a specific Internet connection, and presumably its owner, is using Tor to access. Johnson and colleagues showed that the method could be very effective for an organization that both contributed relays to the Tor network and could monitor some Internet traffic via ISPs.
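
A toy illustration of that kind of pattern-matching, using synthetic per-second traffic volumes rather than real Tor data (all numbers are made up):

    # Toy traffic-correlation attack: match an exit-side flow to one of
    # many entry-side flows by comparing traffic-volume patterns.
    import numpy as np

    rng = np.random.default_rng(0)

    def flow(seconds=300):
        """Synthetic per-second byte counts for one connection."""
        return rng.poisson(lam=rng.uniform(5, 50), size=seconds)

    entry_flows = [flow() for _ in range(100)]  # observed near users, e.g. at ISPs
    exit_flow = entry_flows[42] + rng.poisson(2, size=300)  # the same flow, seen
                                                            # leaving the network, plus noise

    # Pearson correlation against every candidate entry flow
    scores = [np.corrcoef(f, exit_flow)[0, 1] for f in entry_flows]
    print("best match: entry flow", int(np.argmax(scores)))  # 42, the true source

Real attacks are more sophisticated, but the principle is the same: encryption hides content, not the shape and timing of traffic.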

“Our analysis shows that 80 percent of all types of users may be deanonymized by a relatively moderate Tor-relay adversary within six months,” the researchers write in a paper on their findings. “These results are somewhat gloomy for the current security of the Tor network.” The work of Johnson and his colleagues will be presented at the ACM Conference on Computer and Communications Security in Berlin next month.

Johnson told MIT Technology Review that people using the Tor network to protect against low-powered adversaries such as corporate firewalls aren’t likely to be affected by the problem. But he thinks people using Tor to evade the attention of national agencies have reason to be concerned. “There are many plausible cases in which someone would be in a position to control an ISP,” says Johnson.

Johnson says that the workings of Tor need to be adjusted to mitigate the problem his research has uncovered. That sentiment is shared by Roger Dingledine, one of Tor’s original developers and the project’s current director (see “TR35: Roger Dingledine”).

“It’s clear from this paper that there *do* exist realistic scenarios where Tor users are at high risk from an adversary watching the nearby Internet infrastructure,” Dingledine wrote in a blog post last week. He notes that someone using Tor to visit a service hosted in the same country—he gives the example of Syria—would be particularly at risk. In that situation traffic correlation would be easy, because authorities could monitor the Internet infrastructure serving both the Tor user and the service he or she is connecting to.

Dingledine is considering changes to the Tor protocol that might help. In the current design, the Tor client selects three entry points into the Tor network and uses them for 30 days before choosing a new set. But each time new “guards” are selected, the client runs the risk of choosing one that an attacker using traffic analysis can monitor or control. Setting the Tor client to select fewer guards and to change them less often would make traffic correlation attacks less effective. But more research is needed before such a change can be made to Tor’s design.
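
A back-of-envelope simulation shows why fewer, longer-lived guards would help; the adversary fraction and time horizon below are illustrative assumptions, not measured Tor figures:

    # Each guard rotation is a fresh chance to pick an attacker's relay.
    # All parameters are illustrative assumptions.
    import random

    def prob_hit(num_guards, rotation_days, horizon_days=365,
                 frac_malicious=0.05, trials=20_000):
        """Fraction of simulated clients that ever pick a malicious guard."""
        rotations = max(1, horizon_days // rotation_days)
        hit = 0
        for _ in range(trials):
            for _ in range(rotations):
                if any(random.random() < frac_malicious
                       for _ in range(num_guards)):
                    hit += 1
                    break
        return hit / trials

    print(prob_hit(num_guards=3, rotation_days=30))   # 3 guards, 30-day rotation: ~0.84
    print(prob_hit(num_guards=1, rotation_days=365))  # 1 guard kept all year:     ~0.05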

Whether the NSA or any other country’s national security agency is actively trying to use traffic analysis against Tor is unclear. This month’s reports, based on documents leaked by Edward Snowden, didn’t say whether the NSA was doing so. But a 2007 presentation released by the Guardian and a 2006 NSA research report on Tor released by the Washington Post did mention such techniques.

Stevens Le Blond, a researcher at the Max Planck Institute for Software Systems in Kaiserslautern, Germany, guesses that by now the NSA and equivalent agencies likely could use traffic correlation should they want to. “Since 2006, the academic community has done much work on traffic analysis and has developed attacks that are much more sophisticated than the ones described in this report.” Le Blond calls the potential for attacks like those detailed by Johnson “a big issue.”

Le Blond is working on the design of an alternative anonymity network called Aqua, designed to protect against traffic correlation. Traffic entering and exiting an Aqua network is made to be indistinguishable through a mixture of careful timing and the blending in of some fake traffic. However, Aqua’s design has yet to be implemented in usable software, and so far it can protect only file sharing rather than all types of Internet usage.

In fact, despite its shortcomings, Tor remains essentially the only practical tool available to people who need or want to anonymize their Internet traffic, says David Choffnes, an assistant professor at Northeastern University who helped design Aqua. “The landscape right now for privacy systems is poor because it’s incredibly hard to put out a system that works, and there’s an order of magnitude more work that looks at how to attack these systems than to build new ones.”


Data Shows Google’s Robot Cars Are Smoother, Safer Drivers Than You or I

Tests of Google’s autonomous vehicles in California and Nevada suggest they already outperform human drivers.

By Tom Simonite on October 25, 2013

Data gathered from Google’s self-driving Prius and Lexus cars shows that they are safer and smoother when steering themselves than when a human takes the wheel, according to the leader of Google’s autonomous-car project.

Chris Urmson made those claims today at a robotics conference in Santa Clara, California. He presented results from two studies of data from the hundreds of thousands of miles Google’s vehicles have logged on public roads in California and Nevada.

One of those analyses showed that when a human was behind the wheel, Google’s cars accelerated and braked significantly more sharply than they did when piloting themselves. Another showed that the cars’ software was much better at maintaining a safe distance from the vehicle ahead than the human drivers were.

“We’re spending less time in near-collision states,” said Urmson. “Our car is driving more smoothly and more safely than our trained professional drivers.”

In addition to painting a rosy picture of his vehicles’ autonomous capabilities, Urmson showed a new dashboard display that his group has developed to help people understand what an autonomous car is doing and when they might want to take over. “Inside the car we’ve gone out of our way to make the human factors work,” he said.

Although that might suggest the company is thinking about how to translate its research project into something used by real motorists, Urmson dodged a question about how that might happen. “We’re thinking about different ways of bringing it to market,” he said. “I can’t tell you any more right now.”

Urmson did say that he is in regular contact with automakers. Many of those companies are working on self-driving cars of their own (see “Driverless Cars Are Further Away Than You Think”).

Google has been testing its cars on public roads since 2010 (see “Look, No Hands”), always with a human in the driver’s seat who can take over if necessary.

Urmson dismissed claims that legal and regulatory problems pose a major barrier to cars that are completely autonomous. He pointed out that California, Nevada, and Florida have already adjusted their laws to allow tests of self-driving cars. And existing product liability laws make it clear that a car’s manufacturer would be at fault if the car caused a crash, he said. He also said that when the inevitable accidents do occur, the data autonomous cars collect in order to navigate will provide a powerful and accurate picture of exactly who was responsible.


Urmson showed data from a Google car that was rear-ended in traffic by another driver. Examining the car’s annotated map of its surroundings clearly showed that the Google vehicle smoothly halted before being struck by the other vehicle.

“We don’t have to rely on eyewitnesses that can’t be trusted as to what happened—we actually have the data,” he said. “The guy around us wasn’t paying enough attention. The data will set you free.”

Nasdaq says FINRA caps Facebook IPO claims at $41.6 million

By Sarah N. Lynch

WASHINGTON | Fri Oct 25, 2013 3:23pm EDT

(Reuters) – The total value of the claims that market makers can recover after suffering losses due to Nasdaq OMX Group Inc’s botched handling of Facebook Inc’s initial public offering is $41.6 million, the exchange operator said Friday.

The claims figure, which was calculated by Wall Street’s industry-funded watchdog the Financial Industry Regulatory Authority, falls short of the $62 million that Nasdaq had initially set aside to repay brokerages that lost money.

Nasdaq said the figure is lower in part because some claims did not qualify for compensation under its plan.

The main reason for the lower figure, however, was that one firm opted to try to recover funds through arbitration.

The announcement did not name the brokerage, which was UBS AG.

UBS has pegged its losses from the glitch-ridden IPO at $350 million and was vocal in its decision to file an arbitration demand claiming Nasdaq had violated their services agreement.

U.S. District Judge Robert Sweet, however, blocked the bank’s arbitration proceeding over the summer on several grounds, including a determination that the bank’s claims did not fall within the scope of the arbitration provision in their services agreement.

“Nasdaq has demonstrated that the arbitration should be enjoined because it is likely to succeed on the merits and will suffer irreparable harm,” Sweet wrote.

“Given the substantial federal issues posed by UBS claims, the threat of an arbitration panel issuing a decision that may conflict with the decision of a federal court in a parallel litigation also weighs strongly against permitting UBS to proceed with its arbitration proceeding,” he added.

Megan Stinson, a spokeswoman for UBS, told Reuters on Friday that the bank has since appealed the decision to the U.S. Court of Appeals for the Second Circuit. She could not comment further, as the case is currently under seal.

Facebook’s problematic debut on the Nasdaq exchange on May 18, 2012, resulted from a systems failure that prevented the timely delivery of order confirmations and left more than 30,000 Facebook orders stuck in Nasdaq’s system for more than two hours.

Many brokerages were left in the dark wondering if their trades went through. Major market makers estimated they lost collectively up to $500 million in the IPO.

Nasdaq devised a plan to compensate firms up to $62 million, and laid out the criteria firms had to meet to be eligible to file claims.

The U.S. Securities and Exchange Commission approved the compensation plan in March, and FINRA was put in charge of processing the claims for restitution.

Several months after approving the plan, the SEC in May filed civil charges against Nasdaq, saying the exchange’s “ill-fated decisions” on the day of the Facebook IPO led to a series of regulatory violations.

Nasdaq settled the charges and agreed to pay a $10 million fine.

Wealth managers say they hear ‘nary a tweet’ for Twitter’s IPO

By Lauren Young

NEW YORK | Fri Oct 25, 2013 5:23pm EDT

(Reuters) – Twitter Inc has set a relatively modest price range for its closely watched initial public offering, but some financial advisers say their clients are not clamoring to invest in the social media phenomenon.

“Nary a tweet,” says William Baldwin, president of Pillar Financial Advisors in Waltham, Massachusetts, when asked about client interest in the deal.

Out of 29 broker-dealers and independent advisers contacted by Reuters, 23 said they are not recommending Twitter shares. Only one said he would recommend it – and only to certain clients. Five others said they would wait to snap up the stock if it plunges after it begins to trade on the New York Stock Exchange.

While retail interest might be low, tech industry analysts expect good appetite for Twitter stock from institutional investors at the current valuation, though actual institutional investor sentiment remains unclear. Retail investors typically account for 10 to 15 percent of IPOs.

Twitter said on Thursday it will sell 70 million shares at between $17 and $20 apiece, valuing the online messaging company at up to about $11 billion, less than the $15 billion that some analysts had been expecting. If underwriters choose to sell an additional allotment of 10.5 million shares, the IPO could raise as much as $1.6 billion.
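
The top-end arithmetic checks out (a quick calculation using only the figures above):

    # 70 million shares plus the 10.5 million-share overallotment, at $20 each
    shares, overallotment, top_price = 70_000_000, 10_500_000, 20
    print((shares + overallotment) * top_price / 1e9)  # ~1.61, i.e. about $1.6 billion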

Blame last year’s botched Facebook Inc IPO for the diminished interest from Mom and Pop in Twitter.

When the social networking giant’s stock hit the market in May 2012, it encountered allocation problems, trading glitches and a selloff – shares did not recover their IPO price until a year later.

“People are still smarting from that experience,” says René Nourse, a financial adviser at Urban Wealth Management in El Segundo, California. Part of the problem is that investors do not understand Twitter the way they “got” Facebook, Nourse and other advisers say.

NO CALLS ON TWITTER

Three brokers with Morgan Stanley, which was lead underwriter on the Facebook IPO, said clients are showing little or no interest in Twitter.

“With the debacle over Facebook, I haven’t had one client ask about it,” said one of the brokers, based in the southeast. The broker asked not to be identified because they were not authorized to speak to the media.

Another broker, based in northern California, said, “Silicon Valley deals have been super-red hot, but I’ve had no inquiries from clients” about Twitter.

All in all, Twitter is no Facebook.

Like Facebook, Twitter relies on advertising to make money, but unlike Facebook, it is not profitable.

Twitter also has a smaller, less-engaged audience and it is not issuing as much stock, argues Kile Lewis, co-chief executive and founder of oXYGen Financial, an independent financial advisory firm that focuses on clients in their 30s and 40s, also known as Generation X and Generation Y.

“In spite of the ‘glow’ from most on Wall Street, I find it hard to make a recommendation for a company that is running a…loss,” Lewis says. “Until they have a clear plan to monetize their product it seems too risky.”

Twitter more than doubled its third-quarter revenue to $168.6 million, but net losses widened to $64.6 million in the September quarter, it disclosed in a filing earlier this month.

Since its creator Jack Dorsey sent out the first-ever tweet in March 2006, the micro-blogging platform has grown to more than 200 million regular users posting more than 400 million tweets a day.

Twitter is expected to set a final IPO price on November 6, according to a document reviewed by Reuters, suggesting that the stock could begin trading as early as November 7.

INVESTORS POLL ON PRICE RANGE

For individual investors, however, the pendulum is swinging the other way.

An online poll conducted through Friday morning on Reuters.com found that 57 percent of 225 respondents want to invest in the IPO at the range of $17 to $20, while 28 percent are not interested in the stock. Fifteen percent say they are waiting to buy the shares on the open market.

One cautious investor is Betty Tanguilig, a 75-year-old retiree and mother of eight. Back when Facebook launched, she was furious that her financial adviser, Alan Haft of California-based Kelly Haft Financial, could not get her more than $46,000 worth of shares of the social networking site from her $400,000 account.

Now, Tanguilig is taking a more measured approach to the Twitter IPO. Even though her investment in Facebook is up 40 percent, she says she wants to wait and see how Twitter performs before jumping into the stock.

Tanguilig’s hesitance about Twitter is not the result of a lesson learned from the mishaps of the Facebook IPO; rather, like many of her peers, she does not quite get Twitter.

“I use Facebook, I read what people are doing … but I have never used Twitter,” she said.

“I will give it a week,” she said. “And if it does well, I would put in around $20,000.”

Several independent advisers said it suited their investment styles more to wait and see how Twitter performed after the offering.

“We expect that Twitter will fall in value eventually post-offering,” said Stacy Francis, president and CEO of Francis Financial in New York. “That is the ideal time to buy.”

An adviser at Raymond James said he would also advise certain clients to buy at the post-IPO price if the stock tanks on the first day. The adviser asked not to be identified because they were not authorized to speak to the media.

Betsy Billard, an adviser at Ameriprise Financial with offices in Los Angeles and New York, said most large-company mutual funds will be buyers. “My clients will own it – whether they want it or not,” Billard says.

Google smartwatch reportedly coming ‘sooner rather than later’

By Trevor Mogg

The Pebble seems to be doing OK in the smartwatch space considering its humble beginnings, though we’re not so sure about the recently released Galaxy Gear. It’s pretty pricey, after all, and only pairs with a couple of Samsung devices. There’s Sony’s offering too, though you don’t seem to hear much about that.

What we really want to see is a device so stunning it blows the market wide open, a watch that leaves us reaching down to the ground to pick up our jaw so that we can fit it back in place to verbally express our glee and happiness that an awesome wrist-based gadget has finally made it to market.

Could Google’s smartwatch be the one? According to a 9to5Google report Monday, we may be about to find out.

An unnamed source told the site the watch would be coming “sooner rather than later” – admittedly a vague forecast, certainly unspecific, and definitely not that useful. But it could mean we see something before the holiday season, rather than next year. Or next week rather than the week after. Or today instead of tomorrow. Sure, it really depends on how you look at it, although one date mentioned in the report is October 31.

The source also told the site that Google Now would be very much at the center of the Mountain View company’s high-tech watch – which is apparently going with the codename ‘Gem’ – with users able to ask a question and receive a response on its interface, as well as receive lots of contextual information via Google Now’s info-laden cards.

The company is also said to be working on ways to extend the battery life of its smartwatch – a major challenge for makers of a little device like this – as well as focusing heavily on Bluetooth 4.0 connectivity.

The Google Now integration could certainly work to loosen those jaw joints a little, though what’ll really send it groundward will be an awesomely cool design the likes of which we’ve never seen. Can Google come up with the goods? Hopefully all will be revealed sooner rather than later.