IBM overtakes Trend Micro as No. 3 security software maker

IBM overtook Japan’s Trend Micro Inc to become the world’s No. 3 provider of security software last year, after acquiring cybersecurity firm Trusteer, according to market share data released on Tuesday by Gartner.

Symantec Corp and Intel Corp’s McAfee retained their slots as the top two makers of security software in a market whose sales last year rose 4.9 percent to $19.9 billion (11.8 billion pounds), according to the annual survey by the Connecticut-based research firm.

IBM’s revenue from security software climbed 19 percent to $1.14 billion last year, Gartner said. In a deal announced last August, IBM paid close to $1 billion for Trusteer, whose products help businesses fight malicious software and cyber fraud, as part of an effort to boost its line of security offerings.

Meanwhile, Trend Micro’s sales dropped 5.3 percent to $1.11 billion, according to the Gartner survey.

Symantec’s revenue fell 0.3 percent to $3.7 billion. The company replaced its CEO in March, marking the second time it has replaced its leader in two years as its board looks to stimulate revenue growth and its stock price.

McAfee’s revenue rose 3.9 percent to $1.7 billion. EMC Corp, which owns RSA Security, saw revenue climb 5.9 percent to $760 million, putting it into the No. 5 slot in the market, according to Gartner’s survey.

The Light Bulb Gets a Digital Makeover

Electric lights are 135 years old. The Internet is 45. They’re finally getting connected.

To demonstrate how the Internet is changing one of the oldest and least exciting technology businesses around, Shane De Lima, an engineer at Philips Lighting, took out his smartphone. A flick across the screen sent a message to a nearby Wi-Fi router and then to a wireless hub, which shot a radio command to a chip in the base of an LED lamp in front of us.

A moment later, the conference room where we were sitting darkened.

It may seem like Rube Goldberg’s idea of how to turn off a light. Or it could be the beginning of how lighting companies such as Philips find their way from selling lighting hardware into networks, software, apps, and new kinds of services.

The introduction of networked lights is happening because of another trend. Manufacturers have been replacing incandescent and fluorescent lights with ultra-efficient LEDs, or light-emitting diodes. The U.S. Department of Energy says that LEDs had 4 percent of the U.S. lighting market in 2013, but it predicts this figure will rise to 74 percent of all lights by 2030.

Because LEDs are solid-state devices that emit light from a semiconductor chip, they already sit on a circuit board. That means they can readily share space with sensors, wireless chips, and a small computer, allowing light fixtures to become networked sensor hubs.

For example, last year Philips gave outside developers access to the software that runs its Hue line of residential LED lights. Now it’s possible to download Goldee, a smartphone app that turns your house the color of a Paris sunset, or Ambify, a $2.99 app created by a German programmer that makes the lights flash to music as in a jukebox.
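
For developers, controlling a Hue lamp comes down to sending JSON to a REST endpoint on the Hue bridge over the local network. The sketch below is a minimal illustration, not production code; the bridge address and API username are placeholders that a real app obtains by discovering and pairing with the bridge.

    # Minimal sketch: set a Hue lamp's state through the bridge's local REST API.
    # BRIDGE_IP and USERNAME are placeholders; a real app discovers the bridge
    # and obtains a username token by pairing with it first.
    import json
    import urllib.request

    BRIDGE_IP = "192.168.1.2"     # placeholder bridge address
    USERNAME = "newdeveloper"     # placeholder API token
    LIGHT_ID = 1

    state = {"on": True, "bri": 200, "hue": 46920, "sat": 254}  # a deep blue
    url = "http://%s/api/%s/lights/%d/state" % (BRIDGE_IP, USERNAME, LIGHT_ID)
    req = urllib.request.Request(url, data=json.dumps(state).encode("utf-8"),
                                 method="PUT")
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))  # bridge reports which fields changed

Apps like Goldee and Ambify are, at bottom, sequences of calls like this one, timed to a sunset palette or to music.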

That’s a very different kind of business from selling light bulbs, as Philips has done since 1891. “With the new digitization of light, we have only begun to scratch the surface on how we can control it, integrate it with other systems, and collect rich data,” says Brian Bernstein, Philips’s global head of indoor lighting systems.

Another look at how lighting systems are changing will emerge this November, when a 14-story regional headquarters for Deloitte, nearing completion in Amsterdam, will be festooned with networked LEDs in each fixture—the first such installation for Philips.

Each of 6,500 light fixtures will have an IP address and five sensors—all of them wired only to Ethernet cables. (They’ll use “power over Ethernet” technology to deliver the juice to each fixture as well as data.) The fixtures include a light sensor to dim the LEDs during the day, and a motion detector that covers the area directly beneath each light and turns the light off when no one is there. “We expect to spend 70 percent less on light, because systems [give] us much more control,” says Erik Ubels, chief information officer at Deloitte in the Netherlands. Additional sensors in the LED fixtures can monitor temperature, humidity, carbon dioxide, and heat, turning the lights into a kind of building-management system.
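
The control logic inside such a fixture is conceptually simple. Here is a hypothetical sketch of one control step; the Fixture class and all thresholds are invented for illustration and are not Philips’s or Deloitte’s actual software.

    # Hypothetical per-fixture control step: harvest daylight by dimming, and
    # switch off when the zone under the light has been vacant for a while.
    # The Fixture class and the numbers are invented for illustration.
    VACANCY_TIMEOUT_S = 300       # assumed lights-off after 5 minutes of no motion
    TARGET_LUX = 500.0            # assumed desired light level at desk height

    class Fixture:                # stand-in for a real PoE fixture's API
        def __init__(self, ambient_lux):
            self.ambient_lux = ambient_lux
            self.level = 1.0
        def read_light_sensor(self):
            return self.ambient_lux      # daylight measured under the fixture
        def set_level(self, level):
            self.level = level           # 0.0 = off, 1.0 = full brightness

    def control_step(fixture, last_motion_time, now):
        if now - last_motion_time > VACANCY_TIMEOUT_S:
            fixture.set_level(0.0)                           # zone vacant: off
            return
        shortfall = max(0.0, TARGET_LUX - fixture.read_light_sensor())
        fixture.set_level(min(1.0, shortfall / TARGET_LUX))  # dim to fill the gap

    f = Fixture(ambient_lux=320.0)                 # a bright spot near a window
    control_step(f, last_motion_time=0.0, now=60.0)  # motion seen a minute ago
    print("dim level: %.2f" % f.level)             # 0.36: daylight does the rest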

Prices for LEDs are high but falling quickly. A “dumb” LED that puts out as much light as a $1.25 incandescent bulb now sells for $9 (but uses one-sixth the energy and lasts much longer). That’s down from $40 each a couple of years ago. A connected LED bulb from Philips’s Hue line retails in the U.S. for $59. But these will get cheaper, too. Philips says a third of its lighting revenue now comes from LEDs, and about 1.7 percent from the newer LEDs that can connect to the Internet.
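
The economics are easy to check with back-of-the-envelope arithmetic. The sketch below uses the article’s prices and its one-sixth-energy figure; the electricity rate and hours of daily use are assumptions chosen for illustration.

    # Rough payback estimate for the $9 LED vs. the $1.25 incandescent.
    # Wattages reflect the article's one-sixth-energy figure; the electricity
    # price and hours of use are assumptions.
    INCANDESCENT_W, LED_W = 60, 10
    PRICE_PER_KWH = 0.12          # assumed residential rate, $/kWh
    HOURS_PER_DAY = 3             # assumed daily use

    daily_saving = (INCANDESCENT_W - LED_W) / 1000.0 * HOURS_PER_DAY * PRICE_PER_KWH
    payback_days = (9.00 - 1.25) / daily_saving
    print("payback in about %.0f days" % payback_days)   # roughly 14 months

After that, the LED’s longer lifetime is pure savings, which is why the falling sticker price matters more than it might seem.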

Many other uses are being explored. A department store in Düsseldorf, Germany, is using LEDs to send out light frequencies that communicate with shoppers’ smartphones. Philips has also installed street lights in Barcelona that react to how many people are strolling by.

Digital Summit: Being Human in the Future

Intel’s chief anthropologist frames the MIT Technology Review Digital Summit by talking about the values that change, and those that don’t, as technology progresses.

Even as the Internet of things, new interfaces, online health services, and other technological trends develop remarkably quickly, technology companies often forget that the users of these services change relatively slowly. With that observation, Genevieve Bell, the anthropologist who leads user-experience research at Intel, opened the MIT Technology Review Digital Summit today in San Francisco.

Bell told editor in chief Jason Pontin that a better appreciation of fundamental human desires–we all want to be part of a community that shares our values, for instance, and “we like to keep secrets and tell lies”–would make technologists less breathless and more honest about the potential for “smart cities,” connected cars, and other ideas that will be aired at the summit.

Here’s hoping that Bell’s way of framing technological change remains in the air throughout the summit today and tomorrow.

Apple to make 3-5 million smartwatches monthly, sales begin October: report

(Reuters) – Apple Inc is preparing to sell its first wearable device this October, aiming to produce 3 million to 5 million smartwatches a month in its initial run, the Nikkei reported on Friday, citing an unidentified parts supplier and sources familiar with the matter.

Specifications are still being finalized, but the devices are likely to sport curved OLED (organic light-emitting diode) displays and sensors that collect health data ranging from blood glucose and calorie consumption to sleep activity, the Japanese news service cited industry sources as saying.

The industry has long expected Apple to unveil some sort of smartwatch, following the release of Samsung Electronics’ Galaxy Gear watches.

Wall Street is hoping to see a new Apple product this year to galvanize the former stock market darling’s share price and end a years-long drought of ground-breaking devices. CEO Tim Cook has promised “new product categories” in 2014.

Apple declined to comment.

Quantum Computing Research May Back Controversial Company

A quantum computer without its protective thermal canister. The kind of computing D-Wave pursues seeks to define a problem’s answer in terms of an optimal outcome among a near-infinitude of possibilities. (Kim Stallknecht for The New York Times)

In the fight to prove it really has developed a quantum computer, D-Wave Systems may have won a big round.

In a paper published on arXiv.org, researchers from University College London and the University of Southern California compared data obtained from a D-Wave computer with three possible explanations from classical physics and one from the type of quantum computing that D-Wave is pursuing. None of the classical models fit the data, while the quantum model did.

If proved out, the results will be a significant boost for D-Wave, a 13-year-old company that has won contracts with Lockheed Martin and with a joint research endeavor by Google and the National Aeronautics and Space Administration, while also drawing significant skepticism from many academic quantum physicists. Taking an applied approach sharply different from that previously tried by most academics, D-Wave has reported computing gains vastly better than anyone else’s.

Scientists at D-Wave, which is based in British Columbia, were quick to cheer the results.

“This is analogous with the transition from classical physics to relativity,” said Colin Williams, a quantum physicist who serves as D-Wave’s director of business development. “This slays the objections.”

Not entirely. “What I think is going on here is that they didn’t model the ‘noise’ correctly,” said Umesh Vazirani, a professor of computer science at the University of California, Berkeley. He was referring to possible distortions to the data caused by a technique in the experiment that effectively turned up the temperature, affecting the behavior of qubits. “One should have a little more respect with the truth.”

Quantum computing proposes to solve problems using properties of quantum physics that differ from the classical understanding of matter. In quantum physics, subatomic particles can inhabit a range of states, even at the same time, and appear to affect one another over great distances – even, it is posited, across different universes.

The kind of computing D-Wave pursues seeks to define a problem’s answer in terms of an optimal outcome among a near-infinitude of possibilities. This is analogous to the way a plant uses quantum processes to efficiently harvest sunlight for carbohydrates in photosynthesis.
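
Concretely, the problems D-Wave’s machines target are usually written as minimizing an Ising energy over spins that take values +1 or -1. The classical simulated-annealing sketch below illustrates the search problem itself; it is not how the quantum hardware works, which anneals physical qubits rather than running a loop like this.

    # Illustrative only: minimizing an Ising energy
    #   E(s) = sum_i h[i]*s[i] + sum_{i<j} J[i,j]*s[i]*s[j],  s[i] in {-1, +1}
    # by classical simulated annealing. D-Wave's hardware targets the same
    # minimization but attacks it by annealing qubits, not by this algorithm.
    import math
    import random

    def ising_energy(s, h, J):
        e = sum(h[i] * s[i] for i in range(len(s)))
        return e + sum(c * s[i] * s[j] for (i, j), c in J.items())

    def anneal(h, J, steps=20000, t0=2.0):
        s = [random.choice((-1, 1)) for _ in h]
        e = ising_energy(s, h, J)
        for k in range(steps):
            t = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
            i = random.randrange(len(s))
            s[i] = -s[i]                             # propose one spin flip
            e_new = ising_energy(s, h, J)
            if e_new <= e or random.random() < math.exp((e - e_new) / t):
                e = e_new                            # accept the flip
            else:
                s[i] = -s[i]                         # reject: flip back
        return s, e

    # Two coupled spins that prefer to anti-align; ground states are (+1, -1)
    # and (-1, +1), each with energy -1.0:
    print(anneal([0.0, 0.0], {(0, 1): 1.0}))

The number of candidate spin assignments doubles with every added qubit, which is why a machine that finds low-energy states quickly would matter so much.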

Mr. Vazirani seemed more charitable to D-Wave than in the past. In January, he and others published a paper of their own that indicated D-Wave had not cracked quantum computing. Since then, he said, “after talking with them I feel a lot better about them. They are working hard to prove quantum computing.”

In a paper also published on arXiv.org in January, scientists from D-Wave, U.S.C., and the University of British Columbia said they had found evidence in their machine of quantum entanglement, in which one unit, or qubit, can affect another without direct contact.

“None of the classical models show what D-Wave hardware does,” said Mr. Williams.

While the controversy has gone on for years, it may reach an end within a year or two. Mr. Williams said that in April D-Wave would begin conducting experiments with a machine managing over 1,000 qubits, about twice the number currently inside its best machine.

A 2,000-qubit machine is scheduled for the end of the year and will be ready for experiments within a few months after that. If D-Wave can rapidly solve the kinds of large problems such a machine is expected to handle, that would perhaps be the most persuasive evidence possible that we have entered a new computing era.

How Advanced Mobile Networks Could Power Themselves

Cellular networks guzzle electricity and diesel fuel, but researchers are showing how new versions could be cleaner but still reliable.

By David Talbot on March 24, 2014

A dirty secret of mobile communications is that it uses lots of electricity. It’s also sometimes powered by giant tanks of diesel fuel, especially in poor countries. But new research shows that it’s possible to build complex networks that run on renewable or other local power sources, with no need for backup from the electricity grid or diesel fuel.

Small transmitters, called small cells, will provide much of the capacity for future networks. These small cells might be tethered to small windmills or solar panels and batteries, or even include their own built-in power sources and batteries, as in one recent prototype. But fluctuations in the electricity generated at each cell could make the networks they serve less reliable.

A new model addressing that problem, developed by a team led by USC postdoc Harpreet Dhillon, has been accepted for publication in IEEE Transactions on Wireless Communications. It’s still theoretical and requires a practical demonstration, but the work is “very important,” says Jeff Reed, director of the wireless research center at Virginia Tech. “One of the chief obstacles of setting up modern communications in emerging countries is finding a steady source of power. He has shown theoretically the mechanism that allows renewable energy harvesting alone to power the network.”
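
The flavor of the question the model answers can be captured in a toy Monte Carlo simulation: given a fluctuating harvest and a finite battery, how often can a small cell actually serve its load? Every number below is invented for illustration.

    # Toy Monte Carlo sketch (all parameters invented): a small cell powered
    # only by an energy harvester and a battery, in one-hour slots. The cell
    # is "available" in a slot if the battery can cover its power draw.
    import random

    BATTERY_WH = 50.0        # assumed battery capacity
    HARVEST_MEAN_W = 8.0     # assumed average solar/wind harvest
    HARVEST_STD_W = 6.0      # fluctuation in the harvest
    LOAD_W = 6.0             # assumed transmit + idle draw
    SLOTS = 24 * 365         # one year of hourly slots

    random.seed(1)
    battery, down = BATTERY_WH, 0
    for _ in range(SLOTS):
        harvest_wh = max(0.0, random.gauss(HARVEST_MEAN_W, HARVEST_STD_W))
        battery = min(BATTERY_WH, battery + harvest_wh)   # charge for one hour
        if battery >= LOAD_W:
            battery -= LOAD_W                             # serve traffic
        else:
            down += 1                                     # outage this hour
    print("availability: %.3f" % (1 - down / SLOTS))

The real model replaces this brute-force accounting with analysis across a whole network of such cells, but the trade-off it studies, between battery size, harvest rate, and reliability, is the same one.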

Some estimates hold that telecommunications accounts for 1 percent of human carbon emissions and that energy consumption related to telecommunications will triple in the next several years. In some markets, energy-related costs already account for as much as half of a mobile network operator’s expenses. “We need to continue to pursue improvements in energy harvesting technologies and energy storage and battery technologies; however the bigger impact will come from reduced energy consumption of the network equipment itself,” says Thierry Klein, network energy research program leader at Alcatel-Lucent’s Bell Labs.

Some simple networks are already powered entirely by renewable energy and batteries, such as one in a Zambian village that uses a single low-power base station (see “A Tiny Cell Phone Transmitter Takes Root in Rural Africa”). “We take the approach of trying to ramp down the power requirements of the network so that with solar and battery we can make it through the night and a couple days of a rainstorm,” says Vanu Bose, whose company, Vanu, deployed that network. “There is no diesel there, but we did it by really reducing the power consumption of the base station overall.”

The concept should gain traction as more kinds of small base stations, called pico cells and femto cells, get physically closer to users and need only a few watts of power, making renewable technologies more feasible as the main energy source.

Palo Alto Networks to buy Israeli cybersecurity firm for $200 million

Mon Mar 24, 2014 7:21am EDT

Security software maker Palo Alto Networks Inc (PANW.N) said it agreed to buy privately held Israeli cybersecurity company Cyvera for about $200 million to expand its offerings that protect businesses from cyber attacks.

Palo Alto said the deal would close in the second half of its fiscal year 2014.

Palo Alto said Cyvera’s software – which protects businesses from cyber threats by blocking unknown, zero-day attacks – would help its customers to safely enable applications and protect them against threats on any device, across any network.

Zero-day cyber attacks exploit vulnerabilities in computer systems and networks that are known only to the attacker.

Tel Aviv-based Cyvera has 55 employees, Palo Alto said.

(Reporting by Soham Chatterjee in Bangalore; Editing by Kirti Pandey)

Cisco joins cloud computing race with $1 billion plan

A visitor walks past a Cisco advertising panel as she looks at her mobile phone at the Mobile World Congress in Barcelona February 27, 2014. REUTERS/Albert Gea

Cisco Systems Inc plans to offer cloud computing services, pledging to spend $1 billion over the next two years to enter a market currently led by the world’s biggest online retailer, Amazon.com Inc.

Cisco said it will use the money to build data centers to run the new service, called Cisco Cloud Services.

Cisco, which mainly deals in networking hardware, wants to take advantage of companies’ desire to rent computing services rather than buying and maintaining their own machines.

The company said it plans to deliver the service with and through partners including Australian telecom service provider Telstra, tech distributor Ingram Micro Inc, and Indian IT company Wipro Ltd.

“Customers, providers and channel partners … want to rapidly deploy valuable enterprise-class cloud experiences for key customers — all while mitigating the risk of capital investment,” Rob Lloyd, Cisco’s president of development and sales, said in a statement.

Cisco’s plans were first reported by the Wall Street Journal.

Enterprise hardware spending is dwindling across the globe as companies cope with shrinking budgets, slowing or uncertain economies and a fundamental migration to cloud computing, which reduces demand for equipment by outsourcing data management and computing needs.

Microsoft Corp last year said it was cutting prices for hosting and processing customers’ online data in an aggressive challenge to Amazon’s lead in the growing business of cloud computing.

Shares of Cisco, which closed at $21.64 on Friday on the Nasdaq, were up 0.28 percent at $21.70 in pre-market trading on Monday.

(Reporting by Arnab Sen and Supriya Kurane in Bangalore; Editing by Gopakumar Warrier and Savio D’Souza)

Graphene Helps Copper Wires Keep Their Cool

An exotic form of carbon could help relieve a growing problem with the copper used in computer processors.

When people in the chip industry talk about the thermal problems in computer processors, they get dramatic. In 2001, Pat Gelsinger, then vice president of Intel, noted that if the temperatures produced by the latest chips kept rising on their current path, they would exceed the heat of a nuclear reactor by 2005, and the surface of the sun by 2015. Fortunately, such thermal disaster was averted by slowing down the switching speeds in microprocessors, and by adopting multicore chip designs in which several processors run in parallel.

Now the semiconductor industry has another thermal problem to sort out. As chip components shrink, the copper wiring that connects them must shrink, too. And as these wires get thinner, their electrical resistance rises and they heat up tremendously.
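
The underlying physics is straightforward Joule heating: a wire of resistivity rho, length L, and cross-section A has resistance R = rho * L / A and dissipates power P = I^2 * R. The sketch below uses bulk copper resistivity and illustrative dimensions; real nanoscale wires are worse still, because surface and grain-boundary scattering raise the effective resistivity.

    # Why thinner wires run hotter: R = rho * L / A, so halving both width and
    # thickness quadruples resistance per unit length. Bulk copper resistivity
    # is used here; at these scales the effective value is actually higher.
    RHO_CU = 1.7e-8               # ohm-meters, bulk copper
    LENGTH = 1e-6                 # one micrometer of interconnect

    for width in (28e-9, 14e-9):  # two process generations (square cross-section)
        area = width * width
        r = RHO_CU * LENGTH / area
        print("width %2.0f nm: R = %5.1f ohms per micrometer" % (width * 1e9, r))

Going from a 28-nanometer to a 14-nanometer cross-section roughly quadruples the resistance of the same length of wire, which is why heat removal becomes urgent exactly as features shrink.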

A potential solution to this interconnect fever has been found in the form of graphene, an exotic material made from single-atom-thick sheets of carbon that is a superlative conductor of both electrons and heat.

Materials scientists already use copper as a catalyst to grow graphene for other uses. So Alexander Balandin of the University of California, Riverside, and Kostya Novoselov, a physicist at University of Manchester, U.K., who won the 2010 Nobel Prize in Physics for his foundational work with graphene (see “Graphene Wins Nobel Prize”), decided to leave the graphene on the copper to see how it affected the metal’s thermal properties. In a paper published in the journal Nano Letters, they report that a sandwich made of graphene on both sides of a sheet of copper improves the copper’s ability to dissipate heat by 25 percent—a significant figure for chip designers.

Balandin says that the graphene itself doesn’t seem to conduct the heat away. Rather, it alters the structure of the copper, improving the metal’s conductive properties. Heat moving through copper is usually slowed by the boundaries between the crystal grains that make up the metal. Graphene changes this structure, causing those grain boundaries to move farther apart and allowing heat to flow more readily, says Balandin.

Studies were done with relatively thick sheets of copper—much larger than the copper wires found in computer chips—but Balandin expects that the heat-conducting effect will be seen in thinner copper wires, too. He’s now working on copper-graphene wires as small as those used in commercial computer chips.

The problem is an urgent one. This year, Intel is expected to announce products containing 14-nanometer transistors, with copper interconnects on roughly this scale or even smaller. Copper wires will not work below 10 nanometers, and it’s not clear what will. “We haven’t yet found an interconnect material that can work beyond 10 nanometers,” partly due to overheating, says Saroj Nayak, a physicist at the Center for Integrated Electronics at the Rensselaer Polytechnic Institute in Troy, New York.

Majeed Foad, an electrical engineer at Applied Materials, a semiconductor-equipment maker headquartered in Santa Clara, California, who helps the company track research on new materials, says graphene’s properties are exciting, but adds that as chip components are miniaturized, they become more sensitive to high temperatures. It takes a lot of heat to make good quality graphene—Balandin and Novoselov heated their wires to over 1,000 °C. Foad says such temperatures would degrade transistors and other chip components. Balandin, however, points to lab experiments that demonstrate that graphene can be grown at lower temperatures, at least in the research setting.

Regardless, Foad says, chip makers won’t be in any rush to embrace graphene. “Changing materials is very painful, so we will squeeze every last drop of performance out of what we have,” he says.

Sony Joins Virtual Reality Race with New Headset for PlayStation

Inspired by Oculus Rift, Sony is adding virtual reality to the PlayStation 4.

Sony unveiled its long-rumored virtual reality headset on Tuesday at the 2014 Game Developers Conference in San Francisco. Shuhei Yoshida, president of Sony’s Worldwide Studios, stood in front of a packed auditorium of game developers and said: “Virtual reality is the next innovation from PlayStation that could shape the future of video games.”

Code-named Project Morpheus (a name Yoshida admitted the company only settled on within the past few weeks), the headset will work with Sony’s PlayStation 4 video game console (see “Xbox vs. PlayStation: Beginning of the End for Consoles?”). The headset, which Sony said has been in development for three years, will use inertial sensors built into the head-mounted unit and the PlayStation camera to track a user’s orientation and movement. As the player’s head rotates, the image of the virtual world rotates in real time.
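
The core of such head tracking is generic, whatever Sony’s actual implementation looks like: fold each gyroscope sample into an orientation quaternion, then render the scene through the inverse of that rotation. A minimal sketch of the integration step, not Sony’s code:

    # Generic inertial head-tracking sketch (not Sony's code): fold each
    # gyroscope sample (angular rate in rad/s over dt seconds) into the
    # current orientation quaternion (w, x, y, z).
    import math

    def quat_multiply(a, b):
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    def integrate_gyro(orientation, rate, dt):
        wx, wy, wz = (r * dt for r in rate)          # rotation over this frame
        angle = math.sqrt(wx*wx + wy*wy + wz*wz)
        if angle < 1e-12:
            return orientation                       # no measurable rotation
        s = math.sin(angle / 2.0) / angle
        step = (math.cos(angle / 2.0), wx * s, wy * s, wz * s)
        return quat_multiply(orientation, step)

    # One 60 Hz frame of a 90-degree-per-second head turn about the y axis:
    q = integrate_gyro((1.0, 0.0, 0.0, 0.0), (0.0, math.radians(90), 0.0), 1/60.0)
    print(q)

Pure gyro integration drifts over time, which is one reason the PlayStation camera is part of the system: an external fix on the headset’s position and orientation can correct the accumulated error.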

The headset includes a five-inch LCD panel and a 90-degree field of vision. It contains a gyroscope and an accelerometer, and supports 3-D audio. Morpheus has a light, slick design: a black, rounded visor that hangs solidly from a white curved headband. However, Sony has been quick to point out that neither the specifications nor the design is fixed yet.

“Morpheus enables developers to create experiences that deliver a sense of presence—where players feel as though they are physically inside the virtual world of a game,” said a Sony spokesperson. “Presence is like a window into another world that heightens the emotions gamers experience as they play.”

Richard Marks, one of Project Morpheus’s creators, described his experience with the technology: “When I first experienced presence it shifted my skepticism into complete belief.”

The decision to unveil the technology at an event for game developers rather than to the general public was a tactical one. Sony no doubt hopes to replicate the groundswell of independent developer support that created such a buzz around Oculus Rift. This rival headset, first announced in 2012 and slated for release later this year, is a PC-compatible device that has dominated the conversation around virtual reality (see “Virtual Reality Startups Look Back to the Future”). Sony’s Yoshida paid tribute to Oculus Rift in his presentation. “I have an enormous amount of respect for them,” he said. “We were inspired in our work by the enthusiastic reactions of developers and journalists who tried their prototypes.”

Enthusiasm for Oculus Rift has been tempered by some skepticism. Some observers argue that VR is a gimmick that soon wears thin—as evidenced by the technology’s disappearance in the 1990s. Others complain that such headsets often make players nauseated.

Even so, Sony’s announcement adds to the sense that VR’s time has come. While Oculus Rift has the sizeable benefit of being first, Project Morpheus will benefit from Sony’s marketing clout and the installed base of six million PlayStation 4 owners. And unlike Oculus Rift, Sony’s device will benefit from running on hardware with fixed specifications.