Investors Cheer as Facebook Approaches May 2012 IPO Price

By Jennifer Booton | Published July 25, 2013 | FOXBusiness
Facebook (FB) experienced its largest-ever rally on Thursday on the heels of much stronger-than-expected earnings as the world’s largest social network capitalized on its rapidly expanding mobile platform and growing advertising revenues.
Shares of the Menlo Park, Calif.-based tech giant led by Mark Zuckerberg closed up 30%, or $7.84 a share, at a 52-week high of $34.36, bringing the stock closer than ever to its May 2012 IPO price of $38.
With volume surpassing 364 million shares, Thursday was the heaviest day of Facebook trading since its much-criticized IPO, when 581 million shares changed hands.
Driving the gains was the company’s surprise earnings beat in the second quarter, which triggered a slew of price-target increases and a few upgrades on Thursday. Evercore (EVR), which raised Facebook’s rating to “overweight,” pointed to the better-than-expected 53% rise in sales.
Investors cheered Facebook’s robust advertising revenue as ad pricing increased 40% in the U.S., a reflection of simpler ad offerings, which Facebook earlier this year pledged to streamline. It also saw stronger market demand for its popular News Feed targeted ads as local advertisers ramped up their spending. Overall, ad revenues grew 61% year-over-year, beating the consensus view of 40%, while mobile ad revenue expanded to 41% of total ad sales.
“The fact that the beat was driven entirely by advertising revenue makes the outperformance even more compelling,” Goldman Sachs (GS) analyst Heather Bellini said in a note. Goldman raised its price target to $46 from $40 and reiterated its “buy” rating on the stock.
Evercore analyst Ken Sena said the underlying metrics support the view that a positive ad pricing inflection occurred during the quarter, pointing to “further upside” for Facebook shares. The social network sees those numbers improving through the remainder of the year.
Video: Cha-Ching
While challenges linger, notably whether Facebook will be able to maintain the rapid growth of its ad revenue, video, which it introduced in the second quarter through Instagram, represents a potential untapped sweet spot.
Facebook made just one mention of the new Instagram Video service in its earnings report released late Wednesday, reiterating that it saw 5 million video uploads in the first 24 hours of the launch.
However, Spruce Media CEO Rob Jewell said the social network continues to shop around an ad offering that some reports have pegged at as high as $1 million a spot.
“They are very large spots – several hundred thousand up to a million,” he said, citing talks with ad agencies. Facebook has not confirmed that it is shopping around an ad offering for video and would not comment on Wednesday.
Jewell said talks with agencies seem to indicate that Facebook is serious about pushing the offering out. He also said the high price makes sense, as it represents the first equivalent to primetime television ever offered online given Facebook’s massive monthly active user base of 1.5 billion.
“For the first time these agencies used to buying these spots on prime time TV can easily reach that [same audience level] on Facebook,” Jewell said.
In response to a request for comment about video, Facebook pointed to its Instagram for Business blog, which features a series of posts about how businesses are using the service, including at a recent concert celebrating the 25th anniversary of Essence Magazine.
Instagram also began allowing users this month to embed videos on other sites, though for now both services remain free and ad-free for users.
The Bottom Line
Ultimately, Facebook is looking to drive as many ad dollars as possible without losing users.
“The first challenge is to find a way to get people to click on the ads, but ultimately those clicks need to run into revenue,” said Marc Poirier, co-founder of Acquisio.
Customer data from Spruce Media, an ad manager for the Facebook platform, shows a decline in click-through rates (CTR), the percentage of ad impressions that result in clicks, in the second quarter. CTR slumped 30% in three out of four ad placements, while engagement rates across “likes,” “comments” and “shares” declined by an average of 47%.
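Click-through rate is simple arithmetic, worth making explicit. The sketch below uses hypothetical numbers (Spruce Media’s underlying figures aren’t public) to show how a 30% CTR decline is computed:

```python
# Click-through rate (CTR): the share of ad impressions that result in a
# click. All figures below are hypothetical, for illustration only.
def ctr(clicks, impressions):
    return 100.0 * clicks / impressions

q1 = ctr(clicks=500, impressions=100_000)  # 0.50% in a hypothetical Q1
q2 = ctr(clicks=350, impressions=100_000)  # 0.35% in a hypothetical Q2
decline = 100.0 * (q1 - q2) / q1           # a ~30% quarter-over-quarter drop
print(round(q1, 2), round(q2, 2), round(decline, 1))
```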
The untapped video market, Jewell said, would be a game changer.

New Hydrogen-Making Method Could Give a Boost to Fuel-Cell Vehicles

The chemical company BASF has found a greener way to make hydrogen, reviving hopes for fuel-cell vehicles.
By Kevin Bullis
Hydrogen-powered vehicles have been pitched as a greener alternative to gas-powered vehicles, but one problem with this is that the hydrogen is typically produced from a fossil fuel—natural gas—in a process that releases a lot of carbon dioxide.

BASF, the world’s largest chemical company, may have a solution. It’s developing a process that could cut those emissions in half, making hydrogen fuel-cell vehicles significantly cleaner than electric vehicles in most locations (the environmental benefits of electric cars vary depending on how the electricity is generated). Beyond providing a cleaner source of hydrogen for fuel-cell vehicles, the process could also help clean up industrial processes, like oil refining, that use large amounts of hydrogen.
BASF is working on a pilot plant to demonstrate the technology as part of a $30 million project partially financed by the German government. A second part of the project will demonstrate a new way to use carbon dioxide emissions as a raw material for chemicals and fuels, by combining them with the hydrogen produced in BASF’s low-carbon emissions process.

Taken together, the systems could create new markets for natural gas, especially in the United States, where fracking has led to a boom in production. A cleaner form of hydrogen could also revive stronger interest in fuel-cell vehicles. A handful of automakers have plans to start selling fuel-cell vehicles as early as 2015 (see “Why Toyota and GM Are Pushing Fuel-Cell Cars to Market”).

Conventional hydrogen production involves reacting methane—the main ingredient of natural gas—with oxygen or water. This reaction produces hydrogen gas and, as the carbon reacts with oxygen, carbon dioxide.

Researchers have known for a long time that it’s possible to form hydrogen without introducing oxygen, avoiding carbon dioxide production. At high-enough temperatures, methane forms hydrogen and solid carbon. (The carbon can be used in industrial processes, such as making steel.) But this approach hasn’t proved economical.
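Some back-of-the-envelope stoichiometry with the two textbook reactions involved (an illustration, not BASF’s proprietary numbers) shows why avoiding oxygen avoids carbon dioxide:

```python
# Stoichiometry for the two routes described above.
# Steam reforming plus water-gas shift (overall): CH4 + 2 H2O -> CO2 + 4 H2
# Methane pyrolysis:                              CH4        -> C + 2 H2
M_CO2, M_H2 = 44.01, 2.016  # molar masses, g/mol

# Conventional reforming emits 1 mol of CO2 for every 4 mol of H2 produced:
co2_per_kg_h2 = (1 * M_CO2) / (4 * M_H2)
print(round(co2_per_kg_h2, 2))  # ~5.46 kg of direct CO2 per kg of H2

# Pyrolysis emits no CO2 directly; its footprint comes from the energy used
# to reach reaction temperature, which is why heat recycling matters so much.
```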

That’s partly because generating high temperatures requires a lot of energy, and producing that energy usually involves carbon dioxide emissions, which would offset much of the potential environmental benefit. BASF has found better ways to recycle heat within its system, greatly decreasing the amount of energy needed. “The hydrogen production will be cost-competitive, while at the same time having the added advantage of having a reduced carbon footprint,” says Andreas Bode, the BASF project coordinator.

BASF is working with ThyssenKrupp Steel to use the carbon produced in the process in steel manufacturing.

The second part of the project is using the hydrogen to make useful products from carbon dioxide. In the presence of novel catalysts developed by BASF, hydrogen and carbon dioxide form syngas, a mixture of mostly carbon monoxide and hydrogen. Syngas is used to make methanol and other chemicals and fuels. Using hydrogen produced in a way that produces relatively little carbon dioxide helps keep overall emissions low. The basic reaction has been known for some time, but BASF thinks its new catalysts—the details of which it is keeping to itself—can make it economical. “It’s really a breakthrough,” Bode says.
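The syngas step described here is the textbook reverse water-gas shift, CO2 + H2 -> CO + H2O (BASF’s catalysts are undisclosed, so the reaction, not the process conditions, is all we can show). A quick atom-count confirms the equation balances:

```python
# Reverse water-gas shift: CO2 + H2 -> CO + H2O
# Check, for illustration, that the same atoms appear on both sides.
from collections import Counter

def atom_count(species):
    # species: list of (formula-as-dict, stoichiometric coefficient)
    total = Counter()
    for formula, n in species:
        for element, count in formula.items():
            total[element] += count * n
    return total

CO2, H2, CO, H2O = {"C": 1, "O": 2}, {"H": 2}, {"C": 1, "O": 1}, {"H": 2, "O": 1}
reactants = atom_count([(CO2, 1), (H2, 1)])
products = atom_count([(CO, 1), (H2O, 1)])
print(reactants == products)  # True: the equation conserves atoms
```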

Though finding such uses for carbon dioxide will do little to dent overall greenhouse gas emissions, the process could be important because it could allow chemical producers to use alternatives to petroleum.

The Internet’s Innovation Hub

GitHub has created a social network where programmers get together and get work done without bosses, e-mails, or meetings.
By Tom Simonite
San Francisco startup GitHub has all the hallmarks of the next big social network. The company’s base of 3.6 million users is growing fast, and after raising $100 million last year, GitHub was worth $750 million, at least on paper.

Yet GitHub is not a place for socializing and sharing photos. It’s a site where software developers store, share, and update their personal coding projects, in computer languages like Java and Python.

“It’s a social network, but it’s different from the others because it’s built around creating valuable things,” says GitHub CEO Tom Preston-Werner, whose company has been called “Facebook for geeks.”

GitHub’s mix of practicality and sociability has made it a hub for software innovation. People log on from around the globe (78 percent of its users are outside the U.S.) to test and tinker with new ideas for mobile apps or Web server software. For Ethan Mollick, an assistant professor at the Wharton School, GitHub is one of a new class of technology platforms, including the crowdfunding site Kickstarter, that allow innovation without the traditional constraints of geography or established hierarchies. “Virtual communities have more influence on reality now,” he says.

What all this could mean for software hubs like Washington, D.C., and Silicon Valley isn’t yet clear. Certainly, in the post-GitHub world you no longer have to frequent the right coffee shops and parties in the Bay Area to make a name as a talented coder. Companies get founded on the site, and it’s a favorite hunting ground for recruiters as well.
The features of GitHub’s service and community that have driven its popularity could appear opaque to non-coders. The guiding principle is that any and all possible barriers to one person contributing to someone else’s project must be stripped away. That means avoiding e-mail and conventional management. “That idea of not having to ask permission to be involved in something is really big,” says Preston-Werner.

Preston-Werner says GitHub, launched in 2008, has been profitable and signs up around 10,000 new users every day. The newest part of its business model is renting out a version that companies can use internally. In Marissa Mayer’s first company-wide memo after becoming CEO of Yahoo last year, she listed GitHub as one of the ways she intended to fix her company’s stifling bureaucracy.

GitHub’s most important feature is the pull request. It allows a person to suggest a modification to the code of someone else’s project, and shows that suggestion to the project’s owner in a way that makes it easy for them to review the changes. A single mouse click can merge them into the project or start a discussion about the changes. If a person’s pull request doesn’t stick, they can “fork” the project to create a parallel version on GitHub with their idea included.
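The pull-request flow is easy to model. Here is a toy Python sketch of the mechanics described above (not GitHub’s actual implementation or API, just the idea of proposing, merging, and forking):

```python
# A toy model of the pull-request flow: a contributor proposes a change,
# the owner reviews it, and an unmerged proposal can live on in a "fork",
# a parallel copy of the project. Names and structure are invented.
class Project:
    def __init__(self, owner, files):
        self.owner, self.files = owner, dict(files)
        self.pull_requests = []

    def open_pull_request(self, author, path, new_text):
        pr = {"author": author, "path": path, "text": new_text, "merged": False}
        self.pull_requests.append(pr)
        return pr

    def merge(self, pr):
        # The "single mouse click": accept the change into the project.
        self.files[pr["path"]] = pr["text"]
        pr["merged"] = True

    def fork(self, new_owner, pr=None):
        # A parallel version, optionally with the proposed change included.
        copy = Project(new_owner, self.files)
        if pr is not None:
            copy.files[pr["path"]] = pr["text"]
        return copy

repo = Project("alice", {"app.py": "print('v1')"})
pr = repo.open_pull_request("bob", "app.py", "print('v2')")
fork = repo.fork("bob", pr)  # Bob's parallel version carries his change
repo.merge(pr)               # or Alice merges it upstream with one call
print(repo.files["app.py"], fork.files["app.py"])
```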
GitHub’s only physical location is an office in San Francisco where about one-third of its 176 employees work (the rest work from their homes, coffee shops, or rented desks in the U.S. or overseas). No one at the company has set working hours. Some show up at noon and work into the night, others arrive close to dawn and disappear by midafternoon. Only Preston-Werner, as CEO, has a formal job title. Everyone else uses generic or frequently changing descriptors such as “Bad Guy Catcher” or “Señor Open Sorcerer.”

GitHub now plays a major supporting role in the creation of widely used open-source software, and the company uses it to maintain and expand its own service as well. Although Preston-Werner may set the overall goal of such projects, details of how it will be achieved are left to his workforce. Teams of GitHub workers form on an ad-hoc basis, growing, shrinking, and melting away as the company’s needs change and people find new things to work on.

Meetings are seen as a tragic waste of time, and thanks to the pull request, fewer are needed. “I don’t think we’ll ever have to hire managers,” says Preston-Werner.

Preston-Werner hopes his philosophy will spread and that more kinds of work will happen on GitHub. The platform already has features targeted at designers working on images. Some journalists, academics, and even the White House are also experimenting with GitHub to collaborate on articles and write research and policy documents. “Software is where we’re starting, but the vision can encompass a much broader scope than that,” says Preston-Werner.

An Armband Promises a Simpler Route to Gesture Control

Can an armband that enables gesture control by measuring muscle activity make it as a mainstream gadget?
By Rachel Metz on July 26, 2013
When it comes to gesture-control systems like Microsoft’s Kinect, some applications—like gaming—are obvious. Others—like controlling your window blinds—are less so.

Yet that’s the kind of functionality Waterloo, Ontario-based startup Thalmic Labs is hoping will be possible with its first product, an armband called Myo that’s slated to start shipping late this year to some of the company’s earliest customers.

Gesture control has come a long way since Microsoft released the Kinect in 2010—the first truly mass-market gesture-control system. With Myo (pronounced “my-oh”), Thalmic Labs hopes a slew of recently enlisted developers will take things even further by building apps that enable the device to do everything from controlling virtual-reality systems to playing musical instruments (these ideas, plus the aforementioned hands-free window-blind control, were suggested by developers keen to get their hands on the device).

Myo stands out in a sea of gesture-control technologies, many of which rely on cameras or require bulky hardware to recognize your gestures and translate them into actions on a display. In addition to taking up space, such systems may need to be calibrated or require a certain amount of light to operate—all factors that can limit where and how you can use them (see “Look Before You Leap Motion”). And it’s still unknown how much consumers want to toss their computer mice, keyboards, and touch screens for gesture control.

Since the Myo armband interprets the electrical impulses generated by muscle movements in your forearm, it needs neither light nor a camera to operate. This, coupled with its relatively small size, could make it easier to use in darkened rooms or bright sunlight, and may offer neat mobile applications such as allowing it to control features on a smart watch.

Myo can tell the difference between individual finger movements and can sense hand rotations and other motions by measuring the distinct electrical-impulse patterns your movements generate and by using an inertial sensor. With the band on your arm, you can do things like mimic shooting a gun to control a firearm in a video game, or swipe a hand to move through slides in a presentation. These signals are sent to a processor in the armband, where an algorithm translates them into commands that travel via low-power Bluetooth to the gadget you’re trying to control, such as a smartphone.
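As a rough illustration of that pipeline (this is not Thalmic’s actual algorithm; the electrode count, templates, and gesture names below are invented), a classifier might match an incoming muscle-activity sample to the nearest stored gesture template and emit a command:

```python
# Toy gesture classifier: match an 8-electrode EMG activation sample to the
# nearest per-gesture template, then map the gesture to a device command.
import math

# Hypothetical mean-activation templates, one value per electrode.
TEMPLATES = {
    "fist":       [0.9, 0.8, 0.7, 0.9, 0.2, 0.1, 0.2, 0.1],
    "swipe_left": [0.1, 0.2, 0.8, 0.9, 0.9, 0.8, 0.2, 0.1],
    "finger_gun": [0.8, 0.1, 0.1, 0.2, 0.1, 0.9, 0.9, 0.8],
}
COMMANDS = {"fist": "pause", "swipe_left": "next_slide", "finger_gun": "fire"}

def classify(sample):
    # Nearest template by Euclidean distance.
    return min(TEMPLATES, key=lambda g: math.dist(sample, TEMPLATES[g]))

sample = [0.85, 0.75, 0.72, 0.88, 0.25, 0.12, 0.18, 0.09]
gesture = classify(sample)
print(gesture, "->", COMMANDS[gesture])  # fist -> pause
```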

Stephen Lake, a Thalmic Labs cofounder and its CEO, says the idea for Myo grew out of an unrelated project that he and fellow cofounder Matthew Bailey worked on as undergrads in the University of Waterloo’s mechatronics engineering program: a wearable assistive device for the blind that used a laser to scan for obstacles and translated that into tactile feedback. This got them thinking about how wearable devices may be the next big form factor in computing, and how they could be used to better interact with electronics. Last May, a week after graduation, Lake, Bailey, and their third cofounder (and fellow University of Waterloo mechatronics engineering student), Aaron Grant, moved into the office of Thalmic Labs.

Lake is careful not to narrow down the kinds of applications he’s hoping people will make for Myo—he says the company wants to “leave that creativity up to developers.” He mentions that Thalmic Labs has received a lot of interest from developers who are interested in paying the $149 to get their hands on the device before the general public (over 1,000 applications were submitted in the first 24 hours that signups were available), with suggested applications ranging from controlling musical instruments to operating window blinds. Eventually, Lake says, there will be a Myo directory where developers can list their apps.

The company is also interested in having its armband work with as many gadgets as possible. So far, Myo has been set up with devices including an iPhone, iPad, Mac and Windows computers, the Raspberry Pi computer, and a Parrot AR.Drone, as well as “a couple other industrial devices that I can’t really get much into,” Lake says. Thalmic Labs is also exploring how Myo can work with virtual-reality headset Oculus Rift and with Google’s head-mounted computer, Google Glass.

Despite not being on the market yet, Myo has taken off with consumers and investors: more than 30,000 people have preordered the device, which is slated to arrive early next year (at $149 apiece, that means Thalmic Labs will rake in at least $4.5 million in revenue when Myo starts shipping), and last month the company announced a $14.5 million series A funding round led by Intel Capital and Spark Capital.
All the positive attention could backfire, though, if Myo doesn’t work as well as it seems to in demo videos (one of which includes the tagline “effortless interaction”). Competitor Leap Motion, which uses a different kind of technology for its recently released gesture-control device, is facing this problem now, as its gadget has received lackluster reviews, including from MIT Technology Review.

There’s also the possibility that, beyond some obvious consumer applications like gaming, Myo will languish in relative obscurity. Gartner analyst Adib Ghubril says that while the device should be able to work well outdoors—useful for, say, controlling an unmanned aircraft—he expects Myo to be a niche device with few applications beyond gaming and the military.

“It’s not the next Google. It’s not like, ‘Oh my God, we’re all going with a Myo,’ ” he says.

Google Launches a Dongle to Bring Online Video to TV

Phones, tablets, and PCs can play online video on a TV set using Google’s cheap Chromecast device.
By Tom Simonite on July 24, 2013

Google launched a two-inch-long device costing $35 at an event in San Francisco today. The device is meant to bring the 200 billion videos watched online each month to regular TV sets. Called the Chromecast, the device resembles a regular USB thumb drive, but plugs into a television’s HDMI port and connects to a home Wi-Fi network.

Once a Chromecast device has been installed and connected, it is possible to control it using Google’s YouTube app or an online player on an Android or Apple smartphone or tablet, or with a laptop, as long as that device is on the same Wi-Fi network. The Chromecast device then fetches video content from the Internet itself. Through the Chromecast it is possible to turn on a TV, change the volume, pause, skip, and play content.
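The division of labor is the key design point: the phone sends only a small control message, and the dongle fetches the stream itself. A toy Python model of that split (illustrative only, not the actual Google Cast protocol):

```python
# Toy "cast" model: the sender device passes a content URL and commands; the
# dongle, not the phone, is responsible for streaming the video.
class Dongle:
    def __init__(self):
        self.state, self.volume, self.url = "idle", 5, None

    def receive(self, msg):
        if msg["cmd"] == "load":       # sender passes a URL, not video data
            self.url, self.state = msg["url"], "playing"
        elif msg["cmd"] == "pause":
            self.state = "paused"
        elif msg["cmd"] == "volume":
            self.volume = msg["level"]

class Phone:  # any device on the same Wi-Fi network can act as the remote
    def __init__(self, dongle):
        self.dongle = dongle

    def cast(self, url):
        self.dongle.receive({"cmd": "load", "url": url})

    def pause(self):
        self.dongle.receive({"cmd": "pause"})

tv = Dongle()
remote = Phone(tv)
remote.cast("https://example.com/video/123")  # hypothetical URL
print(tv.state, tv.url)
remote.pause()
print(tv.state)
```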

Other content providers have jumped on board with the effort. Netflix has already integrated the functionality into its Android app, and the music-streaming service Pandora will soon also work with the Chromecast. The device went on sale today in the U.S. and is available from Amazon, Best Buy, and through Google’s Play store.

“Everyone loves their phone, tablets, laptops. Why not make them just work with your TV?” asked Mario Queiroz, the Google vice president who announced the Chromecast on stage. “Your personal device should be your remote.”

Queiroz stressed that any device—whether or not it was made by Google or is running the company’s software—can work with a Chromecast. “We will not force you to have the same operating system on all your devices.” He demonstrated how the YouTube app for iPhone could be used to send content to a TV with a Chromecast plugged in, and said any laptop using a Web video player with Chromecast enabled would be able to use the device.

Queiroz said more content announcements are on the way. “Our goal is to partner to create an ecosystem of apps as well as devices,” he said, claiming that developers of mobile and Web apps would need to make only minor changes to their existing apps to make them compatible.

Queiroz also hinted that the technology inside the Chromecast might soon appear inside other products. Adding the technology to television sets might make sense for both Google and TV manufacturers. “This is the first instantiation of Googlecast; over time we expect the functionality to be embedded in a range of devices,” said Queiroz.
The head of Google’s Chrome and Android projects, Sundar Pichai, said today that only 15 percent of U.S. households currently watch any online video on their TV sets.

No one from Google mentioned it today, but the Chromecast is Google’s second attempt at launching hardware to get more people watching online content on TVs, which are more usually served by broadcast and cable networks. In late 2010, Google worked with electronics manufacturers to launch set-top boxes branded as “Google TV” devices that could play YouTube and other Web content on TVs. However, the devices received poor reviews and sold only in limited numbers.

Internet companies, stung by earnings, still look pricey

By Ryan Vlastelica
Expedia Inc can send people to destinations around the world, but it can’t send investors back in time so they can avoid the stock’s massive selloff on Friday.

The stock’s 25 percent fall is its worst in seven years, making Expedia the latest casualty in what is shaping up to be a rough quarter for Internet company earnings. Netflix and Google were also hit hard after reporting results in the last two weeks.

Investors have chased this group higher in 2013, lured by expanding user bases and profit growth that eclipsed the broader market. But that has raised concern among analysts who see the sector as a whole as overvalued and ripe for a sell-off.

That appears to be what befell Expedia on Friday, as it suffered its biggest one-day loss since May 2006 after its results fell short of expectations.

“I don’t see any Internet stock that looks like a value,” said Kim Forest, senior equity research analyst at Fort Pitt Capital Group in Pittsburgh. “I’m a fan of them as a customer, but wouldn’t pay anywhere near the multiple we’re seeing for them.”

One measure of the valuation of these companies, intrinsic value as calculated by StarMine, a Thomson Reuters company, shows that of the 55 or so industries among the top 1,000 U.S. companies, Internet and catalog retailers are the most overvalued, and Internet software and services companies are the seventh-most overvalued.

Intrinsic value evaluates a stock based on projected growth over the next decade, using a combination of analyst forecasts and industry growth expectations.

The most overvalued name in the S&P 500 is Amazon.com Inc, which has climbed more than 20 percent this year to $309 a share and has a mammoth P/E ratio of 133.72. The online retailer’s price is 681 percent greater than its intrinsic value of $38.85.

Late Thursday, Amazon reported it had unexpectedly swung to a quarterly loss and gave a third-quarter outlook that was below forecasts. But despite the disappointment and elevated levels, its shares were up 2.8 percent on Friday.

“It is really looking down the line for other areas of profitability, and that could represent a positive play in the future,” said Chris Hobart, chief executive of Hobart Financial Group in Charlotte, North Carolina. “I’d be a buyer of it right now, but cautiously.”

That Amazon rose after its results “shows that investors are willing to just shrug off the negativity,” said Ryan Detrick, senior technical strategist at Schaeffer’s Investment Research in Cincinnati, Ohio, adding that options activity before the earnings suggested investors had not been bearish on the company before the news.

To justify its current price, Amazon would need to grow earnings 44 percent on a compound basis each year for the next 10 years, according to StarMine.
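That StarMine figure is a compound-growth calculation, and it is easy to sanity-check: growing 44 percent a year for a decade multiplies earnings roughly 38-fold.

```python
# Compound growth: what multiple of today's earnings does 44% annual growth
# produce after 10 years?
growth, years = 0.44, 10
multiple = (1 + growth) ** years
print(round(multiple, 1))  # ~38x current earnings
```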

Shares of Netflix Inc are up 160 percent so far this year as investors bet the online movie renter’s expansion into original content will lead to strong subscriber gains. While earnings topped expectations earlier this week, its subscriber additions were in the middle of the range it forecast in April.

Netflix shares would need to drop 82 percent to reach the stock’s intrinsic value. The shares have been beaten down since hitting a new high on July 18, falling nearly 10 percent.

Salesforce.com, meanwhile, is 83 percent above its intrinsic value. The maker of online sales software reports results next month.

Google Inc, another investor favorite this year with gains of 25 percent, also pulled back after its results last week came in below expectations despite a 20 percent jump in its core business revenue.

Almost all Internet stocks have high valuations. Netflix has a 12-month forward price-to-earnings ratio of 92.9, while Salesforce.com Inc’s is even higher at 100.5. The average stock in the S&P 500 has a P/E ratio of 14.6, according to Thomson Reuters data.

THE FACEBOOK EXCEPTION

While Facebook’s P/E ratio of 40.69 is more than twice the 18.57 ratio of its social media peers, it differs from other online companies in that it has traded fairly flat for most of this year.

Since its highly anticipated initial public offering in 2012, the stock had been unable to regain its $38-per-share IPO price, as investors questioned whether the company could monetize its massive user base and mobile usage.

When its July 24 results indicated it was making progress in those areas, buyers jumped on the stock, pushing it up about 30 percent in its biggest-ever daily increase.

“Facebook is doing a good job on the innovation front, going from a weak area – mobile advertising – to creating something pretty damn powerful,” said Hobart.

Based on its Thursday close, Facebook is more than twice what StarMine indicates is its intrinsic value.

“The results justify the valuation, but with it at these levels there are other companies I would look at first,” said Hobart.

(Reporting by Ryan Vlastelica; Additional reporting by Angela Moon; Editing by Tim Dobbyn)

Google Buys a Quantum Computer

By QUENTIN HARDY

MAY 16, 2013, 5:00 AM

Google and a corporation associated with NASA are forming a laboratory to study artificial intelligence by means of computers that use the unusual properties of quantum physics. Their quantum computer, which performs complex calculations thousands of times faster than existing supercomputers, is expected to be in active use in the third quarter of this year.

The Quantum Artificial Intelligence Lab, as the entity is called, will focus on machine learning, which is the way computers take note of patterns of information to improve their outputs. Personalized Internet search and predictions of traffic congestion based on GPS data are examples of machine learning. The field is particularly important for things like facial or voice recognition, biological behavior, or the management of very large and complex systems.

“If we want to create effective environmental policies, we need better models of what’s happening to our climate,” Google said in a blog post announcing the partnership. “Classical computers aren’t well suited to these types of creative problems.”

Google said it had already devised machine-learning algorithms that work inside the quantum computer, which is made by D-Wave Systems of Burnaby, British Columbia. One could quickly recognize information, saving power on mobile devices, while another was successful at sorting out bad or mislabeled data. The most effective methods for using quantum computation, Google said, involved combining the advanced machines with its clouds of traditional computers.

Google bought the machine in cooperation with the Universities Space Research Association, a nonprofit research corporation that works with NASA and others to advance space science and technology. Outside researchers will be invited to the lab as well.

This year D-Wave sold its first commercial quantum computer to Lockheed Martin. Lockheed officials said the computer would be used for the test and measurement of things like jet aircraft designs, or the reliability of satellite systems.

The D-Wave computer works by framing complex problems in terms of optimal outcomes. The classic example of this type of problem is figuring out the most efficient way a traveling salesman can visit 10 customers, but real-world problems now include hundreds of such variables and contingencies. D-Wave’s machine frames the problem in terms of energy states, and uses quantum physics to rapidly determine an outcome that satisfies the variables with the least use of energy.
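The “energy state” framing can be made concrete with a toy example (illustrative only; D-Wave’s hardware anneals far larger problems): encode a problem as an energy function over binary variables, then search for the lowest-energy assignment.

```python
# A tiny Ising-style energy minimization, brute-forced. D-Wave's machine
# searches landscapes like this with quantum annealing; here the problem is
# small enough to enumerate. The couplings J are invented for illustration:
# negative values reward two variables agreeing, positive values reward
# them disagreeing.
from itertools import product

J = {(0, 1): -1.0, (1, 2): 0.5, (0, 2): -0.7}

def energy(bits):
    s = [1 if b else -1 for b in bits]  # map bits to Ising spins +1/-1
    return sum(J[i, j] * s[i] * s[j] for i, j in J)

best = min(product([0, 1], repeat=3), key=energy)
print(best, energy(best))  # the assignment that satisfies the couplings best
```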

In tests last September, an independent researcher found that for some types of problems the quantum computer was 3,600 times faster than traditional supercomputers. According to a D-Wave official, the machine performed even better in Google’s tests, which involved 500 variables with different constraints.

“The tougher, more complex ones had better performance,” said Colin Williams, D-Wave’s director of business development. “For most problems, it was 11,000 times faster, but in the more difficult 50 percent, it was 33,000 times faster. In the top 25 percent, it was 50,000 times faster.” Google declined to comment, aside from the blog post.

The machine Google will use at NASA’s Ames Research facility, located near Google headquarters, makes use of the interactions of 512 quantum bits, or qubits, to determine optimization. The lab plans to upgrade the machine to 2,048 qubits when that version becomes available, probably within the next year or two. That machine could be exponentially more powerful.
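“Exponentially more powerful” is meant literally: each added qubit doubles the number of basis states the machine works over. A quick illustration of the scale (this says nothing about effective speedup, which depends on the problem):

```python
# Each qubit doubles the state space: 512 qubits span 2**512 basis states,
# 2,048 qubits span 2**2048. Counting decimal digits conveys the scale.
digits_512 = len(str(2 ** 512))    # a 155-digit number of basis states
digits_2048 = len(str(2 ** 2048))  # a 617-digit number
print(digits_512, digits_2048)
```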

Google did not say how it might deploy a quantum computer into its existing global network of computer-intensive data centers, which are among the world’s largest. D-Wave, however, intends eventually for its quantum machine to hook into cloud computing systems, doing the exceptionally hard problems that can then be finished off by regular servers.

Potential applications include finance, health care, and national security, said Vern Brownell, D-Wave’s chief executive. “The long-term vision is the quantum cloud, with a few high-end systems in the back end,” he said. “You could use it to train an algorithm that goes into a phone, or do lots of simulations for a financial institution.”

Mr. Brownell, who founded a computer server company, was also the chief technical officer at Goldman Sachs. Goldman is an investor in D-Wave, with Jeff Bezos, the founder of Amazon.com. Amazon Web Services is another global cloud, which rents data storage, computing, and applications to thousands of companies.

This month D-Wave established an American company, considered necessary for certain types of sales of national security technology to the United States government.


This post has been revised to reflect the following correction:

Correction: May 17, 2013

An earlier version of this story stated that NASA was involved in the purchase of the quantum computer. While the computer will be located at NASA’s facility, NASA was not involved in the purchase.

Dollar soars, stocks gain amid talk of Fed QE exit

By Herbert Lash

NEW YORK | Fri May 17, 2013 10:55am EDT

(Reuters) – Global equity markets rose and the dollar soared against a basket of currencies on Friday, reaching a nearly three-year peak, as speculation mounted over whether the Federal Reserve would soon begin to rein in its asset-buying program.

Wall Street opened higher, with the benchmark S&P 500 rebounding from its worst decline in nearly three weeks, following gains in European shares, which were lifted by carmakers on signs of a revival in domestic sales.

Also lifting stocks was a survey showing U.S. consumer sentiment rebounded in early May to its highest level in nearly six years, as Americans, particularly in upper-income households, felt better about their financial and economic prospects.

The dollar’s strength was largely attributed to weakness in the euro, which fell to a six-week low on market talk that the European Central Bank could introduce negative deposit rates, a move that would make banks pay to park their cash overnight with the ECB.

The dollar index .DXY, which measures its value against a basket of six major currencies, rose to 84.312, its highest in nearly three years. It last traded at 84.262, up 0.81 percent on the day.

The euro fell 0.55 percent to $1.2810, while the dollar hit a 4-1/2 year high versus the Japanese yen, up 0.55 percent at 102.80.

“People are positive about the U.S. economic recovery despite recent weak data and today’s theme is mostly about the broadly strong dollar,” said Charles St-Arnaud, FX strategist at Nomura Securities.

“Meanwhile, data in the euro zone shows they remain in a recession, and raised expectations that the ECB will take further action are weighing on the euro,” he said.

A measure of global equity activity, MSCI’s all-country world stock index .MIWD00000PUS, rose 0.05 percent.

The Dow Jones industrial average .DJI was up 66.70 points, or 0.44 percent, at 15,299.92. The Standard & Poor’s 500 Index .SPX was up 9.98 points, or 0.60 percent, at 1,660.45. The Nasdaq Composite Index .IXIC was up 19.64 points, or 0.57 percent, at 3,484.89.

European shares .FTEU3 bounced off session lows to rise 0.23 percent to 1,248.30.

Gold fell for a seventh straight session, its longest losing streak in four years, driven by speculation that the Fed may soon scale back the asset-purchase program it uses to boost the economy.

Spot gold prices fell $16.49 to $1,369.20 an ounce.

Comments on Thursday from John Williams, president of the Federal Reserve Bank of San Francisco, that the Fed could begin easing up on stimulus this summer stirred speculation.

Prices for U.S. Treasuries added to losses after the Thomson Reuters/University of Michigan’s preliminary reading on the overall index on consumer sentiment rose to 83.7 in early May from 76.4 last month, topping economists’ expectations for 78.

It was the highest level since July 2007.

The benchmark 10-year U.S. Treasury note was down 11/32 in price to yield 1.9175 percent.

In Europe, German Bunds hit one-week highs, with traders citing talk the ECB was checking with some banks on whether they were ready for a potential cut in its deposit rate to below zero.

German Bund futures rose as much as 43 ticks on the day to 145.74, before paring gains to trade 9 ticks higher.

Oil climbed towards $105 a barrel, rebounding from an earlier decline and heading for a small weekly gain, although concern about the strength of demand growth limited the rise.

Brent crude rose 78 cents to $104.56 a barrel. U.S. crude futures added 68 cents to $95.84.

(Additional reporting by David Brett in London, Reporting by Herbert Lash; Editing by Chizu Nomiyama)

China Comes to Silicon Valley at One Startup Accelerator

A year after launch, a startup program is helping U.S. companies reach China—and vice versa.

By Jessica Leber on May 14, 2013

When Jon Bonanno, chief commercial officer of the clean-tech startup Empower Micro Systems, got up to face a small, packed room in Santa Clara, California, last week, it wasn’t like the polished “demo days” run by the highest-profile Silicon Valley startup accelerators. There was no stage, not even a screen for the projector. The sound system buzzed with painful feedback. The 100 or so guests stood or sat in folding chairs under bright fluorescent lights in a space adjoining a large startup workplace that contained a distinct no-no of Silicon Valley office culture: cubicles.

“This is demo day, Chinese style,” Bonanno had joked earlier.

The presentations marked the first anniversary of InnoSpring, an incubator backed mainly by Chinese investors that bills itself as the first to focus on opportunities for startups in both the United States and China. Whatever the scene lacked in frills, it more than made up for in the real opportunities provided to the companies presenting. What was notable for Bonanno, a Silicon Valley veteran, was that some of the top clean-tech investing firms with offices in both the U.S. and China were in the room.

InnoSpring, which in the last year has grown to house 40 companies in its 13,500-square-foot space and invest a total of $2 million in 12 of them, has come along at a compelling time for the startup marketplace in both nations. In China, a growing number of investors and large companies are looking to fund, acquire, and partner with U.S. startups—especially in sectors such as clean tech, where U.S. companies have struggled recently to raise funds and commercialize their products. And in the U.S., more startups are plotting to enter the massive Chinese market at earlier stages in their own maturity.

“We received 300 applications this year,” says Eugene Zhang, the Chinese-born Silicon Valley entrepreneur and angel investor who runs the program. In addition to clean tech, he has seen a lot of interest in startups pursuing cloud computing, big data, and opportunities in the growth of the mobile Internet. “Our primary focus is to build this ecosystem,” he says.

Empower is a good example of InnoSpring’s focus. The company makes a cheaper, more efficient design for microinverters, a technology that converts DC electricity produced by solar panels into AC electricity that can be used on the grid. It developed its prototypes right in the office—and, taking to heart the InnoSpring ethos of frugality, it has spent only $800,000 so far.

Bonanno says the designs could lower the cost of installing and producing solar power and give solar manufacturers a powerful new competitive opportunity by allowing them to mount the boxes right on their panels (today most inverters are bulky, separate appliances). The catch is that most manufacturers are in China, where doing business is not straightforward for U.S. companies, says Bonanno.

Advisors at InnoSpring made introductions and helped Empower navigate visits to China, and the company has already signed a customer deal with one of the world’s top three solar-panel manufacturers, Bonanno says. Last week, it was seeking $5.5 million in new investment.

“What [Empower] have accomplished is a new model,” says Lei Yang, managing director of Northern Light Venture Capital, a Chinese firm that is one of InnoSpring’s backers. “Clean-tech companies here can get to a certain point with less capital. But to scale, they should think about Chinese customers and Chinese partners. That way you can get to the end of the tunnel a lot faster.”

This is a strategy that some U.S. startups are already pursuing. For example, EcoMotors, an engine technology company, recently signed a deal to have a Chinese power company build a manufacturing plant for its engines—at zero capital expense to the startup or its investors, board member Andrew Chung, a Khosla Ventures partner, said at the event.

Not all companies working at InnoSpring are new to China, and most aren’t involved in clean tech. Some, like the file transfer and sharing services DewMobile and Zapya, already have millions of users in China; they came to InnoSpring to create strategies for setting up shop in the United States. Others, such as Netspectrum, a payments company that uses QR codes for its Flash2Pay app, haven’t had much success raising money or finding users in the U.S. and are hoping for better luck in China, where the mobile market is less saturated.

Mobile security company TrustGo Mobile is the biggest success story so far. It came to InnoSpring around the time it launched about 18 months ago (see “TrustGo Promises to Guide You to Safer Apps”). Its CEO has worked in the U.S. and China and had founded the company in Silicon Valley, but with an R&D team based in Beijing. Its app picked up traction rapidly. Today, it has five million users in 80 countries and top ratings for virus detection from independent auditors.

At InnoSpring, TrustGo has participated in some of the weekly events and “office hours” held to introduce startups to mentors and major Chinese firms, like the Internet giant Tencent. Right now, TrustGo is weighing two acquisition offers, one from a U.S. company and one from a Chinese company (it won’t disclose more). At demo day, TrustGo CEO Xuyang Li accepted a graduation certificate. Whatever he decides, it looks as if InnoSpring will have its first successful exit.

Sharper Computer Models Clear the Way for More Wind Power

New prediction models can allow utilities to rely more heavily on wind and save millions.

By Kevin Bullis on May 14, 2013

The utility with the most wind power capacity in the United States, Xcel Energy, is relying more on this power source and saving millions of dollars thanks to new forecasting models similar to those used to predict climate change.

The forecasts, developed by the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, could help address an increasing problem with wind power: how to integrate this intermittent resource into the power grid. NCAR is also developing improved models that could help forecast power from the sun.

“Having an accurate forecast allows us to bring on more renewable energy,” says Drake Bartlett, who is responsible for integrating renewables into Xcel’s grid. Utilities decide a day in advance which power plants will be operating on a given day. Inaccurate wind-power forecasts cause problems with this scheduling in two main ways. First, they force utilities to schedule backup power plants. These run inefficiently at low power, waiting to ramp up their production if less wind power is available than predicted. They waste fuel and are expensive to operate.

Second, bad forecasts make it difficult for utilities to justify shutting down baseload power plants, even if there might be more than enough wind to make them unnecessary. These plants—often coal or combined-cycle natural-gas plants—are expensive and time-consuming to shut down and restart. If they’re shut down and wind power is lower than expected, the utility has to use more expensive power from faster-responding plants or turn to the high-priced spot market. If, on the other hand, the utility leaves the baseload plants on, it may have to curtail wind power, perhaps by telling wind farms to turn off some turbines. In that case, the utility loses money two ways. It has to pay for the fuel to run the baseload plants, even though it didn’t really need power from them. And under its contract with wind farm operators, it still has to pay for the wind power it isn’t using.
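A toy calculation, with entirely made-up prices and volumes, makes the "loses money two ways" point concrete:

```python
# Hypothetical curtailment scenario: the baseload plant stays on and wind is
# turned away, so the utility pays twice. Every figure here is an assumption.
fuel_cost_baseload = 30.0   # $/MWh of fuel to keep the coal plant running
wind_contract_price = 25.0  # $/MWh still owed to the wind farm when curtailed
curtailed_mwh = 500.0       # wind energy curtailed over the period

# Fuel the utility didn't really need, plus wind power it didn't use.
double_payment = curtailed_mwh * (fuel_cost_baseload + wind_contract_price)
print(double_payment)  # 27500.0
```

With better forecasts, the utility could have shut the coal plant down in advance and avoided the fuel half of that bill entirely.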

The old forecasts allowed these scenarios to arise often. On average, predictions differed from actual power output by 20 percent, and sometimes they were off by as much as 50 percent. The new forecasts cut that error by 30 to 40 percent, giving Xcel the confidence to reduce the number of backup plants online, Bartlett says. This has saved the utility $22 million in fuel; Xcel hasn’t calculated the additional savings that have come from being able to avoid the spot market.

The new forecasts are also accurate enough to support shutting down baseload power. “A few years ago, we didn’t have the confidence to shut off baseload plants,” Bartlett says. “Now, if it’s a long weekend with beautiful weather and lots of wind, we’ll shut down a coal plant. That allows us to integrate more renewable energy.”

NCAR took several steps to make better forecasts. It improved on previous weather prediction models by running them at finer increments of time and space—something that requires added computing power. It combined its models with those from other organizations and with measurements of actual conditions at wind farms to predict wind speeds. Crucially, it then converts these wind speed predictions to estimates of how much power wind farms will produce, something that can differ considerably from what manufacturers claim (see “Better Computer Models Needed for Mega Wind Farms”).

Also, instead of just running a model once, NCAR runs it multiple times. The average result is typically more accurate, says Sue Ellen Haupt, director of the center’s Weather Systems Assessment Program.
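The benefit of averaging can be reproduced in a toy Monte Carlo experiment (with assumed noise levels, not NCAR’s actual models): each ensemble member’s independent error partially cancels in the mean.

```python
import random
import statistics

random.seed(42)
TRUE_WIND = 12.0            # actual wind speed in m/s (made-up value)
N_MEMBERS, N_DAYS = 20, 1000

single_errors, ensemble_errors = [], []
for _ in range(N_DAYS):
    # Each ensemble member is the truth plus independent model noise.
    members = [TRUE_WIND + random.gauss(0, 2.0) for _ in range(N_MEMBERS)]
    single_errors.append(abs(members[0] - TRUE_WIND))                  # one run
    ensemble_errors.append(abs(statistics.mean(members) - TRUE_WIND))  # average

mae_single = statistics.mean(single_errors)
mae_ensemble = statistics.mean(ensemble_errors)
# The ensemble mean's error is consistently much smaller than a single run's.
```

For independent errors, averaging n members shrinks the typical error by roughly a factor of the square root of n, which is why running the model many times pays for the extra computing.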

Even though the models are run at high resolution, they don’t catch everything. The next step is to focus on better ways to predict two kinds of events—changes in wind speed, and weather that causes ice to form on wind turbine blades.

Fast changes in wind speed can be particularly difficult to deal with on the grid (see “Wind Turbines, Batteries Included, Can Keep Power Supplies Stable”). Icing predictions will also be important; it’s difficult to know just when a storm will create the right conditions to deposit ice on wind turbines, but when ice does form, it can greatly decrease the amount of power a turbine can generate. In the past, forecasts have led Xcel to plan for large amounts of wind, only to have power production drop off unexpectedly when ice formed.

Predicting solar power may be a bigger challenge altogether (see “A Solution to Solar Power Intermittency”). Power output from solar panels can change in seconds, and clouds are among the hardest things to account for in climate models. NCAR is using data from satellites and land-based sensors to improve its cloud predictions, and it’s working to predict how different amounts of sunlight (and other factors like temperature) translate into actual power output from solar panels.