A Gooey Cure for Crack-Prone High-Capacity Batteries

Polymer glue helps fracture-prone high-capacity batteries last through more charges.

By Katherine Bourzac

If electric cars are ever to drive hundreds of miles between charges—as they must to compete with gas-powered cars—their batteries will need to store much more energy. Unfortunately, several of the most promising high-capacity battery materials are prone to breaking in ways that would cut an electrified road trip short.

Now researchers at Stanford University have shown that mixing one such promising battery material, silicon microparticles, with self-healing polymers keeps the electrode from failing, yielding a longer-lasting battery. They say the self-healing polymers could stabilize other promising but damage-prone battery materials.

The self-healing battery’s negative electrode, or anode, combines silicon with polymers that act like chemical zippers, healing cracks that form when the battery is used and recharged.

The self-healing battery electrode has so far been tested with pure lithium metal as the positive electrode, because its storage capacity is much greater than that of any conventional cathode. The self-healing electrode itself has eight times the storage capacity of the carbon anodes found in a conventional rechargeable lithium-ion battery. If paired with a conventional cathode, it would create a battery that stored about 40 percent more energy. If paired with a correspondingly high-capacity cathode, total energy storage would be doubled or tripled.
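A rough back-of-the-envelope calculation shows why an eightfold gain at the anode translates into a much smaller gain for the whole cell: total capacity is limited by both electrodes. The Python sketch below uses assumed, typical electrode capacities (illustrative values, not figures from the Stanford work) and treats capacity as a stand-in for energy, ignoring voltage differences.

```python
# Illustrative estimate only: assumed electrode capacities in mAh/g, chosen to
# roughly reproduce the article's numbers; capacity is used as a proxy for
# energy, ignoring cell voltage.

def cell_capacity(anode_mah_g, cathode_mah_g):
    """Specific capacity of a matched anode/cathode pair, limited by both."""
    return 1.0 / (1.0 / anode_mah_g + 1.0 / cathode_mah_g)

graphite = 372.0                   # conventional carbon anode
silicon = 8 * graphite             # self-healing silicon anode, ~8x capacity
conventional_cathode = 170.0       # assumed typical lithium metal oxide cathode
high_capacity_cathode = 300.0      # hypothetical cathode with much higher capacity

baseline = cell_capacity(graphite, conventional_cathode)
print(cell_capacity(silicon, conventional_cathode) / baseline)   # ~1.4x, about 40% more
print(cell_capacity(silicon, high_capacity_cathode) / baseline)  # ~2.3x, in the doubled-or-more range
```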

While previous silicon batteries could only be discharged and recharged 10 times before breaking down, the self-healing battery weathers 100 charging cycles. But that’s still not enough, acknowledges Stanford materials scientist Yi Cui. “We need to go to 500 cycles for portable electronics, and a few thousand for electric vehicles,” Cui says.

Still, Cui’s approach may provide a new way forward for promising materials that have been stalled. “This points to a way to solve a general problem with high-capacity anodes,” says Paul Braun, a materials scientist at the University of Illinois at Urbana-Champaign who is not involved in the work.

Silicon anodes take in large amounts of lithium when the battery is charged, and release all that lithium as the battery is put to use. Such anodes can store a lot of energy in a small space, but their high capacity is a liability for the materials they’re made of: as large amounts of lithium enter and leave the battery, the silicon expands and contracts, cracking the anodes the first time they’re used. The same thing happens to anodes made of tin and germanium.

For the self-healing battery, Cui collaborated with another Stanford researcher, Zhenan Bao, who had previously developed self-healing electronic skin based on a stretchy, sticky polymer (see “Electric Skin that Rivals the Real Thing”).


When the polymer is fractured, it flows back together. The group mixed in some conductive carbon particles to ensure that the polymer, which isn’t conductive, wouldn’t impede the flow of electricity through the battery. This gooey mixture was then combined with silicon microparticles to make an anode. When the battery is charged and discharged, the silicon still expands, contracts, and fractures, but the polymer pulls everything back together. “Normally, once the anode cracks, you lose electrical contact,” says Cui. “The self-healing polymer ties the broken parts back together.”

There are other ways to deal with silicon’s tendency to crack. Cui’s group has experimented with nanostructured forms of silicon, including nanowires, that can withstand the strain of charging and recharging. Nanostructured silicon anodes like this are being developed by Amprius, a Sunnyvale, California company that Cui cofounded. However, researchers and companies are still learning about these nanomaterials. “It’s easy to get your hands on a small vial of nanostructured silicon, but to make 50 or 60 tons at a reasonable cost is a big problem that hasn’t been solved,” says Braun.

Cui says the combination of microparticles with the healing polymer could be less expensive and more practical for high-capacity batteries than approaches that require expensive nanomaterials. The silicon microparticles used in the self-healing battery demonstration can be bought off the shelf in large quantities and aren’t very expensive.

Nancy Sottos, a materials scientist at the University of Illinois at Urbana-Champaign, has developed yet another approach: Sottos mixes capsules of healing materials in with the battery materials. One such material is a bubble that bursts to release conductive metal to heal electrical connections in a damaged battery. Her group has made early proof-of-concept demonstrations using this method.


Yuegang Zhang, a battery researcher at the Lawrence Berkeley National Laboratory, says the Stanford self-healing binder shows promise for other kinds of high-capacity battery materials, such as tin. Zhang has taken a different approach in his own work, mixing tin nanostructures with stretchy, strong, conductive graphene to hold the anodes together. Noting the small number of times Cui’s silicon batteries can be recharged, he says, “silicon still has problems, but I like this idea.”

Now that they’ve made the first demonstration, Cui and Bao are working on fixes that would allow their self-healing silicon battery to go through more charge cycles. “We’re just starting,” Cui says.

Quantum Light Harvesting Hints At Entirely New Form of Computing

Emerging Technology From the arXiv

November 25, 2013

Light harvesting in plants and bacteria cannot be properly explained by classical processes or by quantum ones. Now complexity theorists say the answer is a delicate interplay of both, an idea that could transform computation.

Physicists have long known that plants and bacteria convert light into chemical energy in a way that is hugely efficient. But only in recent years have they discovered that the molecular machines behind this process rely on quantum mechanics to do the job.

That’s a big surprise because of the temperatures involved. Quantum states are highly fragile—sneeze and they disappear in a puff of smoke. Physicists can maintain these states for some time in carefully controlled environments at low temperatures, but nobody can explain how they survive in the warm, wet environments inside living things.

Today, Gabor Vattay at Eotvos University in Budapest and Stuart Kauffman at the University of Vermont in Burlington have the answer. They say the processes behind light harvesting are a special blend of the quantum and the classical. And that this delicate mix represents an entirely new form of computing that nature might exploit in other systems too.

The most-studied quantum processes in light harvesting systems occur in a structure known as the Fenna-Matthews-Olson or FMO complex, a huge pigment protein that is part of the light-gathering machinery in green sulphur bacteria. Embedded in these protein structures are reaction centres that convert the energy from light into chemical energy.

When light hits the FMO complex, the energy must travel across the protein matrix until it reaches a reaction centre. And amazingly, this transfer occurs with an efficiency of almost 100 per cent.

That’s puzzling because the only way for the light energy to find a reaction centre is to bounce through the protein network at random, like a ricocheting billiard ball. This process would take too long, much longer than the nanosecond or so it takes for the light energy to dissipate into the environment and be lost.

So the energy transfer process cannot occur classically in this way. Instead, physicists have gathered a variety of evidence showing that the energy transfer is a quantum process.

The thinking goes like this. Because energy can exist in a superposition of states, it can travel a variety of routes around the network at the same time. And when it finds the correct destination, the superposition collapses, leaving the energy at the reaction centre. The result is an almost perfect transfer of energy.

But Vattay and Kauffman say that this kind of pure quantum process cannot be responsible either. That’s because a number of quantum processes slow down the movement of quantum objects through random networks like this. “Quantum mechanics has adverse effects too,” they say.

One of these party-poopers is known as Anderson localisation, a phenomenon that prevents the spread of quantum states in random media. Because the quantum state acts like a wave, it is vulnerable to interference effects, which prevent it propagating in a random network.

Another is the quantum zeno effect, the paradoxical phenomenon in which an unstable state never changes if it is watched continuously. That’s because watching involves a series of measurements that constantly nudge the state, preventing it from collapsing. This is the quantum version of the watched-pot-never-boils effect.

A similar thing happens to the quantum state of the energy during light harvesting. This quantum state will inevitably interact with the environment but these interactions act like measurements. This triggers a quantum zeno-like effect that prevents the state from collapsing at the reaction centre. So the energy transfer cannot occur in this way, say Vattay and Kauffman.

Instead, they propose a new process in which the quantum search mechanism and the interaction with the environment combine to overcome Anderson localisation. It is the interplay between these processes that delivers the energy to the reaction centre in an optimal way, they say.

The idea is that the interaction with the environment changes the wave-like nature of the quantum state just enough to prevent Anderson localisation. At the same time, the quantum zeno effect extends the lifetime of the quantum state, allowing it to find its way to the reaction centre. It is this interplay between the quantum and classical worlds that allows the energy transfer.

That explains the quantum-like behaviour of light harvesting processes at room temperature. But Vattay and Kauffman say the idea has other important implications. The problem of finding a reaction centre in a protein matrix is formally equivalent to many other problems in computing. So it ought to be possible to turn light harvesting to the task of computing by mapping one problem onto the other.

That could dramatically improve computational speeds at room temperature. “Computers based on artificial light harvesting complexes could have units with 100-1000 times larger efficiency at room temperature,” say Vattay and Kauffman.

What’s more, this kind of computation may already be at work in nature. “Since the realization of this mechanism seems now relatively easy, it is an important question if it has been realized in light harvesting systems or is also present in other biological transport or optimization processes. Especially in the human brain,” they say.

If they are right, this new kind of computation could generate a flurry of interest in a short space of time.

Online Anonymity in a Box, for $49

A cheap device called the Safeplug makes it easy to use the Tor anonymity network at home.

By Tom Simonite

Tor, a privacy tool used by activists, criminals, and U.S. intelligence to obscure traces of their online activities, is being repackaged for the mass market. A $49 device launched today and targeted at consumers makes it relatively easy to route a home Internet connection through the Tor network. The Safeplug, as the device is called, can also block most online ads.

“It’s meant to be a mass-market product,” says Jed Putterman, chief product officer of PogoPlug, the company that developed the Safeplug and whose main business is providing cloud storage and backup for home use. “We wanted to make a family-friendly way to get the protection Tor offers.”

The most straightforward way of using Tor today involves downloading a bundle of software, including a new Web browser, onto each device a person wants to use anonymously. The Safeplug, in contrast, is a small box that is simply plugged into a home Internet router to allow any Internet-connected device to make use of Tor. The Safeplug acts as a proxy server, meaning that computers on the same network use it as a go-between to access the Internet. The device also has a built-in advertising blocker, which is disabled by default.
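The Safeplug does this transparently for the whole network, but the underlying idea is the same one a single program uses when it is pointed at a local Tor client. As a generic illustration (not the Safeplug’s own interface), the snippet below routes a Python request through Tor’s default SOCKS proxy on port 9050; it assumes a Tor client is running locally and that the requests library is installed with SOCKS support (`pip install requests[socks]`).

```python
# Generic illustration of proxying traffic through Tor -- not the Safeplug itself.
# Assumes a local Tor client listening on its default SOCKS port, 9050.
import requests

proxies = {
    "http":  "socks5h://127.0.0.1:9050",   # socks5h also resolves DNS via the proxy
    "https": "socks5h://127.0.0.1:9050",
}

# check.torproject.org reports whether a request arrived over the Tor network
resp = requests.get("https://check.torproject.org/api/ip", proxies=proxies, timeout=60)
print(resp.json())   # expected to include something like {"IsTor": true, ...}
```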

Putterman hopes the device will appeal to families who wish to prevent their ISPs or online companies such as ad networks from being able to connect their IP address or identity with their online activity. IP addresses can be used to reveal a person’s location (see “Tracking Trick Shows the Web Where You Are”), and to link diverse threads of online activity into behavioral profiles for advertising purposes. Safeplug may also appeal to those disturbed by recent revelations about NSA surveillance (see “Circumventing Encryption Frees NSA’s Hands Online”).

However, Mehmet Güneş, an assistant professor at the University of Nevada, Reno, who studies anonymity tools, says that users of the Safeplug will only remain truly obscure if they adjust their online behavior in other ways. “Tor provides unlinkability from source to destination, and people confuse that with anonymity,” he says. While using Tor, people can easily leak identifying information via the Flash plug-in, other media add-ons, or information they type or send, says Güneş.

The Tor Project’s download page cautions that “You need to change some of your habits” for just those reasons. It recommends disabling all browser plug-ins.

Another challenge for the Safeplug is that Tor’s design causes it to slow down Web traffic. Putterman suggests that people set their device to apply Tor only to Wi-Fi connections to protect phones, tablets, and laptops, while leaving devices using wired connections for bandwidth-intensive tasks such as streaming TV or gaming to function as normal.

The Tor network keeps Internet traffic private by making it take an indirect route. The process hides users’ IP address from the services they’re accessing, and prevents an ISP or other entities that may be monitoring the connection from knowing what those services are. Data from a Tor user hops via three “relays,” which are run by volunteers, on its way to its destination, a process mediated by encryption technology that prevents any relay from knowing the ultimate origin or destination of the data (see “Dissent Made Safer”).
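The layering can be illustrated with a toy sketch. This is not Tor’s actual protocol (real Tor negotiates keys with each relay and uses its own cell format); it simply shows how wrapping a message in one encryption layer per relay means each relay learns only the next hop. The relay names are hypothetical, and the sketch assumes the third-party cryptography package is installed.

```python
# Toy onion-layering sketch -- not Tor's real protocol or key exchange.
import json
from cryptography.fernet import Fernet

RELAYS = ["relay1", "relay2", "relay3"]                 # hypothetical relay names
KEYS = {r: Fernet.generate_key() for r in RELAYS}       # one symmetric key per relay

def build_onion(message, destination):
    """Wrap the message in three layers; the exit relay's layer is innermost."""
    hops = list(zip(RELAYS, RELAYS[1:] + [destination]))   # (relay, its next hop)
    packet = message
    for relay, next_hop in reversed(hops):                 # build from the inside out
        packet = Fernet(KEYS[relay]).encrypt(
            json.dumps({"next": next_hop, "data": packet}).encode()
        ).decode()
    return packet.encode()

packet = build_onion("GET / HTTP/1.1", "example.com")
for relay in RELAYS:                                       # each relay peels one layer
    layer = json.loads(Fernet(KEYS[relay]).decrypt(packet))
    print(relay, "forwards to", layer["next"])             # sees only the next hop
    packet = layer["data"].encode()
print("exit relay delivers:", packet.decode())
```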

Owners of a Safeplug can also set it to act as a relay to help out other people using Tor. Today there are over 4,000 Tor relays around the world, but Putterman believes his device will lead to the appearance of many more. “We hope to add tens of thousands of Tor relays,” he says. “Relays don’t use a lot of bandwidth and really help the community.”

Güneş says the addition of more relays would fortify the Tor network: “A larger crowd helps you anonymize better.” The addition of more Tor relays could also improve the performance of the network, reducing the bandwidth hit for people using it.

Data on how Tor is used today is hard to come by. A study by Güneş two years ago found that the vast majority of Tor traffic is ordinary Web browsing, so Safeplug may appeal to people already using the network this way.


There are over three million Tor connections in use today, although the number is declining after a spike this summer. The jump is believed to have been caused by a malicious software package using the network to communicate with its operators.

The original development of Tor was supported by the U.S. Naval Research Laboratory. The nonprofit Tor Project, which now maintains the tool, gets most of its support from the U.S. government, mostly the Department of Defense and the State Department.

The Internet of Things, Unplugged and Untethered

A startup called Iotera wants to let you track your pets, your kids, or your belongings without relying on commercial wireless networks.

By Rachel Metz

The iPhone wouldn’t stop chirping. On a recent morning I was riding in a car through Silicon Valley with three people from a startup called Iotera. A small tracking tag was attached to the passenger-side sun visor. Our mission was to see how far we could drive from Iotera’s office building before the tag would stop transmitting its location to a small base station on the building’s roof—which meant the location-logging app on the phone would go silent.

It took several miles. That’s good news for Iotera, which is developing tracking technology that can work throughout cities without requiring access to a commercial wireless network or even a short-range wireless protocol like Bluetooth. The system uses GPS-embedded tags that can last for months on a single charge, occasionally sending their coordinates over unlicensed wireless spectrum to small base stations with a range of several miles.

Iotera expects businesses to use its technology to track everything from tools on construction sites to workers in dangerous places like oil rigs. Or people might use it to keep an eye on their pets. Iotera’s founders say two companies (which it won’t name) are trying it out. One is using it to help parents monitor their children’s whereabouts, and the other is tracking company-owned devices.

Iotera is taking a risk by trying to sell its own wireless base stations. But the market for the Internet of Things—wherein normally unconnected devices are connected to the Internet so they can be tracked or made more functional—is growing fast. Networking equipment maker Cisco Systems estimates that there are 10.9 billion “people, processes, data, and things” connected to the Internet, and the company expects this to rise to as high as 50 billion by 2020.

Iotera grew out of an idea cofounder Ben Wild had for a long-range wireless tracking network. He thought such a network would be perfect for keeping tabs on animals, even though he doesn’t have any himself. “I just thought it would be cool to track pets,” he says. “And I thought this cool new wireless technology that I had an idea of how to build could really enable this market.”


Wild has been working on wireless tech for years. Before Iotera he founded Wirama, a maker of RFID product-locating technology that Checkpoint Systems bought in 2009. When I visited Iotera’s Redwood City office—a small suite in a startup-filled building with desks covered in prototypes of sensing tags and base stations—Wild, along with cofounder Robert Barton and software engineer Esther Rasche, demonstrated how their new technology works.

Wild handed me a sensor tag in a 3-D-printed case about the size of a small matchbox. If you clipped one to your dog’s collar, it would occasionally log Fido’s location and report it back to a small access point connected to the Internet. From there, it would be punted to Iotera’s servers, and then to a website or mobile app. Under what Wild calls “typical operating conditions,” the tag’s battery would last up to five months.

Fido’s location data could be transferred over as many as four miles in a suburban area, or two miles in a dense urban one, where more things can interfere with the signals.

To get a sense of the tracking in action, we jumped into Barton’s car, which was set up with a tag. We wound around hills, listening to a steady stream of chirps emanating from an iPhone app, which gathered location data from the access point and showed us the car’s movement on a map. The noise petered out as we hit Interstate 280 near a community college about four miles from Iotera’s office.


A cellular chip would eliminate the need for a base station and still let the tags work over a broader area, but Wild says it would require too big a trade-off in battery life and sensor size. Not to mention, it would cost a lot more, given the monthly fee to use a wireless carrier’s network. Given that each of its base stations costs a few hundred dollars, stands under a foot tall, and has a range of several miles, Iotera’s founders believe they can cover a whole city with just a handful of them.

However, each station can support only a certain number of tags, depending on the application it’s being used for. For an application that requires infrequent data transfers, like monitoring water meters each hour, an access point could handle 10,000 or more tags. But if you want to track a lost pet and transmit its GPS location every 30 seconds, Barton says, a station could support just hundreds.
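The trade-off is a simple airtime budget: the shared channel can carry only so many transmissions per unit of time, so the more often each tag reports, the fewer tags fit. A minimal sketch, using assumed per-report airtime and utilisation figures chosen only to roughly reproduce the numbers quoted above (Iotera hasn’t published its actual parameters):

```python
# Assumed figures for illustration only -- not Iotera's published specifications.
AIRTIME_PER_REPORT_S = 0.25   # assumed on-air time each location report occupies
USABLE_FRACTION = 0.9         # assume 90% of the channel's airtime is usable

def max_tags(report_interval_s):
    """Tags one base station can serve if each sends one report per interval."""
    return int(report_interval_s * USABLE_FRACTION / AIRTIME_PER_REPORT_S)

print(max_tags(3600))   # hourly water-meter-style reports -> ~13,000 tags
print(max_tags(30))     # a GPS fix every 30 seconds       -> ~100 tags
```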

Software Mines Science Papers to Make New Discoveries

Software digests thousands of research papers to accurately identify proteins that could prove valuable cancer drug targets.


Software that read tens of thousands of research papers and then predicted new discoveries about the workings of a protein that’s key to cancer could herald a faster approach to developing new drugs.

The software, developed in collaboration between IBM and Baylor College of Medicine, was set loose on more than 60,000 research papers that focused on p53, a protein involved in cell growth implicated in most cancers. By parsing sentences in the documents, the software could build an understanding of what is known about enzymes called kinases that act on p53 and regulate its behavior; these enzymes are common targets for cancer treatments. It then generated a list of other proteins mentioned in the literature that were probably undiscovered kinases, based on what it knew about those already identified. Most of its predictions tested so far have turned out to be correct.
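The article doesn’t detail how the IBM/Baylor software scores candidates, but the general “guilt by textual association” idea can be sketched in a few lines: represent each protein by the words of the sentences that mention it, then rank candidates by how closely that profile resembles the combined profile of already-known p53 kinases. The toy example below (hypothetical corpus and candidate names, not the actual system) illustrates the approach.

```python
# Toy sketch of ranking candidate kinases by textual similarity -- not the
# IBM/Baylor system. The corpus and the candidate protein names are made up.
import math
from collections import Counter

def profile(sentences):
    """Bag-of-words context profile built from sentences mentioning a protein."""
    words = Counter()
    for s in sentences:
        words.update(w.lower().strip(".,") for w in s.split())
    return words

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

known_kinases = {   # proteins already reported to phosphorylate p53
    "CHK2": ["CHK2 phosphorylates p53 at serine 20 after DNA damage"],
    "ATM":  ["ATM phosphorylates p53 and regulates its stability"],
}
candidates = {      # hypothetical proteins pulled from the wider literature
    "PROT_X": ["PROT_X phosphorylates p53 in response to DNA damage"],
    "PROT_Y": ["PROT_Y binds actin filaments in the cytoskeleton"],
}

reference = Counter()
for sentences in known_kinases.values():
    reference.update(profile(sentences))

ranked = sorted(candidates, reverse=True,
                key=lambda p: cosine(profile(candidates[p]), reference))
print(ranked)   # PROT_X scores above PROT_Y
```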

“We have tested 10,” Olivier Lichtarge of Baylor said Tuesday. “Seven seem to be true kinases.” He presented preliminary results of his collaboration with IBM at a meeting on the topic of Cognitive Computing held at IBM’s Almaden research lab.

Lichtarge also described an earlier test of the software in which it was given access to research literature published prior to 2003 to see if it could predict p53 kinases that have been discovered since. The software found seven of the nine kinases discovered after 2003.

“P53 biology is central to all kinds of disease,” says Lichtarge, and so it seemed to be the perfect way to show that software-generated discoveries might speed up research that leads to new treatments. He believes the results so far show that to be true, although the kinase-hunting experiments are yet to be reviewed and published in a scientific journal, and more lab tests are still planned to confirm the findings so far. “Kinases are typically discovered at a rate of one per year,” says Lichtarge. “The rate of discovery can be vastly accelerated.”

Lichtarge said that although the software was configured to look only for kinases, it also seems capable of identifying previously unidentified phosphatases, which are enzymes that reverse the action of kinases. It can also identify other types of protein that may interact with p53.

The Baylor collaboration is intended to test a way of extending a set of tools that IBM researchers already offer to pharmaceutical companies. Under the banner of accelerated discovery, text-analyzing tools are used to mine publications, patents, and molecular databases. For example, a company in search of a new malaria drug might use IBM’s tools to find molecules with characteristics that are similar to existing treatments. Because software can search more widely, it might turn up molecules in overlooked publications or patents that no human would otherwise find.

“We started working with Baylor to adapt those capabilities, and extend it to show this process can be leveraged to discover new things about p53 biology,” says Ying Chen, a researcher at IBM Research Almaden.

It typically takes between $500 million and $1 billion to develop a new drug, and 90 percent of candidates that begin the journey don’t make it to market, says Chen. The cost of failed drugs is cited as one reason that some drugs command such high prices (see “A Tale of Two Drugs”).

Lawrence Hunter, director of the Center for Computational Pharmacology at the University of Colorado Denver, says that careful empirical confirmation is needed for claims that the software has made new discoveries. But he says that progress in this area is important, and that such tools are desperately needed.

The volume of research literature both old and new is now so large that even specialists can’t hope to read everything that might help them, says Hunter. Last year over one million new articles were added to the U.S. National Library of Medicine’s Medline database of biomedical research papers, which now contains 23 million items. Software can crunch through massive amounts of information and find vital clues in unexpected places. “Crucial bits of information are sometimes isolated facts that are only a minor point in an article but would be really important if you can find it,” he says.

Lichtarge believes that software like his could change the way scientists conduct and assess new research findings. Scientists currently rely in part on the reputation of the people, institutions, and journals involved, and the number of times a paper is cited by others.

Software that gleans meaning from all the information published within a field could offer a better way, says Lichtarge. “You might publish directly into the [software] and see how disruptive it is,” he says.

Hunter thinks that scientists might even use such tools at an earlier stage, having software come up with evidence for and against new hypotheses. “I think it would really help science go faster. We often waste a lot of time in the lab because we didn’t know every little thing in the literature,” he says.


Apple acquires Israeli 3D chip developer PrimeSense

(Reuters) – Apple Inc has bought Israel-based PrimeSense Ltd, a developer of chips that enable three-dimensional machine vision, the companies said on Monday, a move that suggests gesture-controlled technologies could appear in new devices from the maker of iPhones and iPads.

An Apple spokesman confirmed the purchase but declined to say how much it spent or what the technology will be used for. Israeli media said Apple paid about $350 million for PrimeSense, whose technology powers the gesture control in Microsoft Corp’s Xbox Kinect gaming system.

“Apple buys smaller technology companies from time to time, and we generally do not discuss our purpose or plans,” an Apple spokesman said in an e-mail.

A spokeswoman for PrimeSense said: “We can confirm the deal with Apple. Further than that, we cannot comment at this stage.”

It was the second acquisition of an Israeli company by Apple in less than two years. Apple bought flash storage chip maker Anobit in January 2012.

PrimeSense’s sensing technology, which gives digital devices the ability to observe a scene in three dimensions, was used to help power Microsoft’s Xbox Kinect device.

The Israeli company has licensed the technology to Microsoft but it is unclear how that deal changes with Apple’s acquisition of PrimeSense, which provides the technology behind Kinect’s visual gesture system.

Apple and Microsoft have other licensing deals between them. Microsoft did not return a call seeking comment.

Analysts expect PrimeSense’s technology to show up in Apple devices in about 12 to 18 months, potentially in the often-speculated living-room device, such as a television dubbed iTV by fans.

“While we have not had any more evidence of an iTV coming in the next 6 to 12 months, some sort of living room appliance is in Apple’s future and gesture technology could be critical,” Peter Misek, an analyst with Jefferies, said in a note to clients.

Apple’s interest in PrimeSense was first reported in July by Israeli financial newspaper Calcalist.

With Nokia, Microsoft to Invest $2B More in Wireless Chips

By Jennifer Booton

FOXBusiness

Microsoft’s (MSFT) $7.2 billion purchase of Nokia’s (NOK) device business will make it one of the world’s biggest buyers of silicon products, helping it to expand its line of tablets and smartphones, according to IHS (IHS).

The Redmond, Wash.–based Windows software maker, which announced plans in September to buy the heart of Nokia’s smartphone business, will buy an estimated $5.9 billion worth of semiconductors in 2014.

That’s up from $3.78 billion this year and $3.55 billion in 2012, making Microsoft the world’s eighth biggest purchaser of chips and enabling it to improve its gadgets and better compete with larger rivals like Samsung and Apple (AAPL).

To give some perspective, Microsoft was just the 13th and 15th largest purchaser in 2012 and 2013, respectively, far behind the industry leaders.

Of that $5.9 billion, IHS says Microsoft will use roughly 37% – or about $2.2 billion – on chips for wireless devices like smartphones and tablets, a sharp rise from the mere $110 million it spent on them last year.

“One challenge for Microsoft will be formulating a strategy for success and deeper penetration of its smartphone and tablet lines,” said Myson Robles-Bruce, senior analyst of semiconductor spend and design for IHS.

Microsoft has said that it doesn’t expect the Nokia buy to actually start driving significant improvements to profits until 2016. The deal faces a Nokia shareholder vote next Tuesday and remains subject to antitrust approvals.

Yahoo increases share buyback authorization by $5 billion

BY ALEXEI ORESKOVIC

SAN FRANCISCO Tue Nov 19, 2013 5:50pm EST

(Reuters) – Yahoo Inc said it has increased its share repurchase authorization by $5 billion and that it planned to offer $1 billion in convertible notes.

Shares of Yahoo increased 1.6 percent to $35.17 in after hours trading on Tuesday following the announcement.

Yahoo has aggressively repurchased its common stock in recent quarters using cash obtained from selling a portion of its stake in Chinese e-commerce giant Alibaba Group. In the first nine months of 2013, Yahoo spent $3.1 billion on share buybacks.

The buybacks have helped boost Yahoo’s shares roughly 74 percent this year, even as the Web portal’s revenue growth has remained stagnant amid competition from Facebook Inc, Google Inc and Twitter Inc.

Yahoo said the convertible notes will be due in 2018, with interest payable semi-annually in arrears on June 1 and December 1 of each year, beginning on June 1, 2014.

The interest rate and other terms of the senior unsecured notes will be determined at the time of pricing, Yahoo said. The company also intends to grant the initial purchasers of the notes the right to buy an additional $150 million in notes.

Microsoft’s Gates highlights tough requirements for new CEO

BY BILL RIGBY

BELLEVUE, Washington Tue Nov 19, 2013 6:28pm EST

(Reuters) – Chairman Bill Gates said on Tuesday he was pleased with Microsoft Corp’s progress in finding a new chief executive but outlined the difficulties in picking the next leader of the world’s largest software company as it seeks to reinvent itself as a mobile computing power.

Gates is part of the four-man committee that gave itself a year to find a successor to Chief Executive Officer Steve Ballmer after he announced his plan to retire in August. Sources close to the process have said the search is down to a handful of candidates, but the company itself has been largely silent.

“We’ve been doing a lot of meetings with both internal and external candidates and we’re pleased with the progress,” said Gates at Microsoft’s annual shareholder meeting in Bellevue, Washington. “We’re looking at a number of candidates and I’m not going to give a timeline today.”

Ballmer said in August he planned to retire within 12 months, and the CEO search committee – headed by lead independent director and former IBM executive John Thompson – tasked itself with finding a replacement by the end of that period. Sources close to the company expect an appointment no later than January.

Gates, who in previous years did not address the shareholders’ meeting with prepared remarks, went on to describe the challenges of finding the right person to lead Microsoft.

“It’s a complex role to fill – a lot of different skills, experience and capabilities that we need,” he said. “It’s a complex global business the new CEO will have to lead. The person has to have a lot of comfort in leading a highly technical organization and have an ability to work with our top technical talent to seize the opportunities.”

Gates paused briefly and choked up with emotion after he thanked Ballmer for his work at the company, saying both he and Ballmer have a commitment “to make sure the next CEO is the right person, for the right time, for the company we both love.” Gates and Ballmer are the only two CEOs in Microsoft’s 38-year history.

Gates, who co-founded Microsoft with Paul Allen in 1975, then left the stage and sat in the front row of an audience of around 400 people, alongside other members of the board. That was a departure from previous years when he remained onstage and occasionally answered questions.

Microsoft has not shed much light on its CEO search, but sources close to the process have told Reuters the company has narrowed its shortlist of candidates to just a handful, including Ford Motor Co chief Alan Mulally and former Nokia CEO Stephen Elop, as well as former Skype CEO and internal candidate Tony Bates, now responsible for Microsoft’s business development.

Microsoft remains highly profitable and last month beat Wall Street’s quarterly profit and revenue forecasts.

But the company has come under criticism for missing some of the largest technology shifts in the past few years from Internet search to social networking, and Apple Inc and Google Inc are now at the vanguard of a mobile computing revolution that is eroding its core PC-based business.

Microsoft’s shares closed down 0.5 percent at $36.74 on Nasdaq.

Liquid Metal Printer Lays Electronic Circuits on Paper, Plastic and Even Cotton

A simple way to print circuits on a wide range of flexible substrates using an inkjet printer has eluded materials scientists. Until now.

One of the dreams of makers the world over is to be able to print electronic circuits on more or less any surface using a desktop printer. The great promise is the possibility of having RFID circuits printed on plastic or paper packaging, LED arrays on wallpaper and even transparent circuits on glass. Or simply to rapidly prototype circuits when designing new products.

There is no shortage of conducting inks that aim to do this, but they all have drawbacks of various kinds. For example, many inks have low or difficult-to-control conductivity or need to be heated to temperatures of up to 400 degrees C after they have been printed, thereby limiting the materials on which they can be printed. The result is that the ability to print circuits routinely on flexible materials such as paper or plastic has remained largely a dream.

Until now. Today, Jing Liu and pals at the Technical Institute of Physics and Chemistry in Beijing say they’ve worked out how to print electronic circuits on a wide range of materials using an inkjet printer filled with liquid metal. And they’ve demonstrated the technique on paper, plastic, glass, rubber, cotton cloth and even an ordinary leaf.

The new technique is straightforward. The magic sauce is a liquid metal: an alloy of gallium and indium that is liquid at room temperature. They simply pump it through an inkjet printer to create a fine spray of liquid metal droplets that settle onto the substrate.

The droplets rapidly oxidise as they travel through the air, and this oxide forms a surface layer on each drop that prevents further oxidisation. That’s handy because the liquid metal itself does not easily adhere to the substrates. But the metal oxides do, and this is the reason, say Jing and co, that the circuits adhere so well to a wide range of surfaces.

They also say it’s relatively easy to create almost any circuit pattern, either by moving the printer head over the substrate or by using a mask. And they’ve demonstrated this by printing conducting circuits on cotton cloth, plastic, glass and paper as well as on a leaf.

That looks to be a useful development. The ability to print circuits in magazines or on t-shirts will surely attract much interest. And being able to test circuit designs by printing them out using a desktop printer will be invaluable to many makers.

Perhaps most exciting of all is that the technology behind all this is cheap and simple: there’s no reason why it couldn’t be pushed to market very rapidly. And that raises the prospect of being able to print prototype circuits in small businesses and even at home.

Could it be that liquid metal printers could bring about the same kind of revolution in home-built electronics that 3D printers triggered with material design? And might it be possible to combine them into a single machine that prints functional electronic devices?