The First Carbon Nanotube Computer

A carbon nanotube computer processor is comparable to a chip from the early 1970s, and may be the first step beyond silicon electronics

By Katherine Bourzac on September 25, 2013

For the first time, researchers have built a computer whose central processor is based entirely on carbon nanotubes, a form of carbon with remarkable material and electronic properties. The computer is slow and simple, but its creators, a group of Stanford University engineers, say it shows that carbon nanotube electronics are a viable potential replacement for silicon when it reaches its limits in ever-smaller electronic circuits.

The carbon nanotube processor is comparable in capabilities to the Intel 4004, that company’s first microprocessor, which was released in 1971, says Subhasish Mitra, an electrical engineer at Stanford and one of the project’s co-leaders. The computer, described today in the journal Nature, runs a simple instruction set based on MIPS. It can switch between multiple tasks (counting and sorting numbers) and keep track of them, and it can fetch data from and send it back to an external memory.

The nanotube processor is made up of 142 transistors, each of which contains carbon nanotubes that are about 10 to 200 nanometers long. The Stanford group says it has made six versions of carbon nanotube computers, including one that can be connected to external hardware—a numerical keypad that can be used to input numbers for addition.

Aaron Franklin, a researcher at the IBM Watson Research Center in Yorktown Heights, New York, says the comparison with the 4004 and other early silicon processors is apt. “This is a terrific demonstration for people in the electronics community who have doubted carbon nanotubes,” he says.

Franklin’s group has demonstrated that individual carbon nanotube transistors—smaller than 10 nanometers—are faster and more energy efficient than those made of any other material, including silicon. Theoretical work has also suggested that a carbon nanotube computer would be an order of magnitude more energy efficient than the best silicon computers. And the nanomaterial’s ability to dissipate heat suggests that carbon nanotube computers might run blisteringly fast without heating up—a problem that sets speed limits on the silicon processors in today’s computers.

Still, some people doubt that carbon nanotubes will replace silicon. Working with carbon nanotubes is a big challenge. They are typically grown in a way that leaves them in a tangled mess, and about a third of the tubes are metallic, rather than semiconducting, which causes short-circuits.

Over the past several years, Mitra has collaborated with Stanford electrical engineer Philip Wong, who has developed ways to sidestep some of the materials challenges that have prevented the creation of complex circuits from carbon nanotubes. Wong developed a method for growing mostly very straight nanotubes on quartz, then transferring them over to a silicon substrate to make the transistors. The Stanford group also covers up the active areas of the transistors with a protective coating, then etches away any exposed nanotubes that have gone astray.

Wong and Mitra also apply a voltage to turn all of the semiconducting nanotubes on a chip to “off.” Then they pulse a large current through the chip; the metallic ones heat up, oxidize, and disintegrate. All of these nanotube-specific fixes—and the rest of the manufacturing process—can be done on the standard equipment that’s used to make today’s silicon chips. In that sense, the process is scalable.

Late last month at Hot Chips, an engineering design conference hosted, coincidentally, at Stanford, the director of the Microsystems Technology Office at DARPA made a stir by discussing the end of silicon electronics. In a keynote, Robert Colwell, former chief architect at Intel, predicted that by as early as 2020, the computing industry will no longer be able to keep making performance and cost improvements by doubling the density of silicon transistors on chips every 18 to 24 months—a feat dubbed Moore’s Law after the Intel cofounder Gordon Moore, who first observed the trend.

 

Mitra and Wong hope their computer shows that carbon nanotubes may be a serious answer to the question of what comes next. So far no emerging technologies come close to touching silicon. Of all the emerging materials and new ideas held up as possible saviors—nanowires, spintronics, graphene, biological computers—no one has made a central processing unit based on any of them, says Mitra. In that context, catching up to silicon’s performance circa 1970, though it leaves a lot of work to be done, is exciting.

Victor Zhirnov, a specialist in nanoelectronics at the Semiconductor Research Corporation in Durham, North Carolina, is much more cautiously optimistic. The nanotube processor has 10 million times fewer transistors on it than today’s typical microprocessors, runs much more slowly, and operates at five times the voltage, meaning it uses about 25 times as much power, he notes.
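
The factor of roughly 25 follows from a standard rule of thumb, assumed here, that a chip’s dynamic switching power scales with the square of its supply voltage when capacitance and clock frequency are held comparable:

```latex
% Rule-of-thumb dynamic switching power: C = switched capacitance,
% V = supply voltage, f = clock frequency. With C and f comparable,
% a 5x higher supply voltage implies roughly 25x the power.
P \propto C V^{2} f
\quad\Rightarrow\quad
\frac{P_{\text{nanotube}}}{P_{\text{silicon}}} \approx \left(\frac{5V_0}{V_0}\right)^{2} = 25
```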

Some of the nanotube computer’s sluggishness is due to the conditions under which it was built—in an academic lab using what the Stanford group had access to, not an industry-standard factory. The processor is connected to an external hard drive, which serves as the memory, through a large bundle of electrical wires, each of which connects to a large metal pin on top of the nanotube processor. Each of the pins in turn connects to a device on the chip. This messy packaging means the data has to travel longer distances, which cuts into the efficiency of the computer.

With the tools at hand, the Stanford group also can’t make transistors smaller than about one micrometer—compare that with Intel’s announcement earlier this month that its next line of products will be built on 14-nanometer technology. If, however, the group were to go into a state-of-the-art fab, its manufacturing yields would improve enough to be able to make computers with thousands of smaller transistors, and the computer could run faster.

To reach the superb level of performance theoretically offered by nanotubes, researchers will have to learn how to build complex integrated circuits made up of pristine single nanotube transistors. Franklin says device and materials experts like his group at IBM need to start working in closer collaboration with circuit designers like those at Stanford to make real progress.

“We are well aware that silicon is running out of steam, and within 10 years it’s coming to its end,” says Zhirnov. “If carbon nanotubes are going to become practical, it has to happen quickly.”

Report: Twitter to List $1.5B IPO on New York Stock Exchange

By Matt Egan

Hoping to avoid the fiasco of Facebook’s (FB) initial public offering, micro-blogging site Twitter has reportedly decided to raise about $1.5 billion in an IPO listed on the New York Stock Exchange.

Despite the report from TheStreet.com, sources told FOX Business a final decision about where to list the highly anticipated IPO hasn’t been made and neither exchange has been contacted by Twitter.

Losing out on the highly anticipated Twitter debut would be a blow to Nasdaq OMX Group (NDAQ), which was scarred by a recent glitch that caused a three-hour trading freeze for all Nasdaq-listed stocks.

The $1.5 billion figure represents just a fraction of the record-breaking $16 billion raised by Facebook’s IPO, which was marred by technical glitches on the Nasdaq Stock Market.

According to TheStreet.com, Twitter may sell between 50 million and 55 million shares at $28 to $30 a share.

That range would allow the San Francisco social media company to raise between $1.4 billion and $1.65 billion and give the company a valuation of about $15 billion to $16 billion.

Representatives from Twitter didn’t respond to a request for comment on the report.

At this point neither Nasdaq nor NYSE has been informed about a listing decision by Twitter, and it’s possible discussions are ongoing.

“We don’t comment on filings at this early juncture,” said a Nasdaq spokesperson.

NYSE Euronext (NYX) said, “We do not comment on speculation.”

Twitter confidentially filed with the Securities and Exchange Commission for an IPO earlier this month, paving the way for the most anticipated debut since Facebook’s offering in May 2012.

Companies with less than $1 billion in revenue can file for an IPO without making their records public right away under the 2012 Jumpstart Our Business Startups (JOBS) Act.

Research firm eMarketer estimates Twitter will generate about $582.8 million in 2013 ad revenue and nearly $1 billion next year.

Ahead of its IPO, Twitter is reportedly seeking a revolving credit line worth $500 million to $1 billion from JPMorgan Chase (JPM) and Morgan Stanley (MS).

Microsoft purchases Nokia’s devices business for $7.2 billion

By Mike Flacy

Announced on the Microsoft News Center as well as in a joint letter from Microsoft CEO Steve Ballmer and Nokia CEO Stephen Elop on the Official Microsoft Blog, the software company is acquiring Nokia’s Devices & Services business along with a license to Nokia’s patents. Assuming the deal is approved by Nokia’s shareholders and regulatory agencies, Microsoft will spend approximately $7.2 billion on the acquisition. Specifically, Microsoft will spend 3.79 billion euros on the mobile devices unit and 1.65 billion euros on Nokia’s patent portfolio. Nokia, however, will continue to create cellular networking equipment, build maps and location-based services, and create other technology outside of the mobile devices unit.

Ballmer and Elop together state: “With the commitment and resources of Microsoft to take Nokia’s devices and services forward, we can now realize the full potential of the Windows ecosystem, providing the most compelling experiences for people at home, at work, and everywhere in between. … We will continue to build the mobile phones you’ve come to love, while investing in the future – new phones and services that combine the best of Microsoft and the best of Nokia.”

In a separate email to Microsoft employees, Ballmer says, “This is a smart acquisition for Microsoft, and a good deal for both companies. We are receiving incredible talent, technology and IP. We’ve all seen the amazing work that Nokia and Microsoft have done together.”

Ballmer went on to mention that Elop will be returning to Microsoft to manage the entire devices team. Many analysts believe that Elop is on the short list to become the next CEO of Microsoft once Ballmer steps down from the position within the next twelve months.

Microsoft hopes to use Nokia’s resources and technology to carve out a much larger share of the smartphone market, which is currently led by Google’s Android operating system and Apple’s iPhone. As detailed by Forbes, Microsoft has seen significant Windows Phone growth in European markets, but continues to lag far behind Android and iOS in the United States. Specifically, Microsoft’s market share in the United States is just 3.5 percent.

As for whether the Windows Phone platform will be exclusive to Nokia-branded devices, Microsoft will continue to license the Windows Phone platform to other companies, according to a blog post by Microsoft VP Terry Myerson.

Voice-Analyzing App Scans Football Players for Concussion

Notre Dame researchers will test a concussion-detection app on nearly a thousand high school and youth football players.

By Susan Young

A voice-analysis program run on a tablet could help high school and youth coaches recognize concussions on the sidelines of football games and other high-impact sports.

After identifying concussions in collegiate boxers in a preliminary study, University of Notre Dame researchers will soon test the app on approximately 1,000 youth and high school football players. The program pulls out the vowel segment from a set of predetermined words and then analyzes that sound for changes that may indicate a brain injury.

 

Despite all the attention given to the issue in recent years, concussions are still a “highly underrecognized injury,” says Gerry Gioia, a pediatric neuropsychologist at Children’s National Medical Center in Washington, D.C.

The Centers for Disease Control and Prevention (CDC) estimates that as many as 3.8 million sports-related concussions occur in the U.S. each year; but because concussions can go undiagnosed, the true number of such injuries could be much higher. Most concussions are not accompanied by loss of consciousness, and the variety of symptoms can be subtle and difficult to spot. But catching a concussion can be critically important for athletes, since an undetected injury can put them at greater risk for another. Problems with memory and mental agility associated with concussion get worse with repeated concussions.

“The issue is omnipresent, but when it actually happens, it’s not uncommon for parents or coaches to get confused about what they should really be looking for,” Gioia says. Working with the CDC, Gioia has developed a question-and-answer style app to guide parents or coaches through potential symptoms of concussion and what they should do next. While professional and collegiate football teams have physicians and athletic trainers waiting on the sidelines to run psychological and cognitive tests on players who have taken a heavy hit, youth and high school teams usually do not. Gioia’s app provides a checklist of signs and symptoms to determine whether a player needs to stop playing and go see a doctor.

Researchers at Notre Dame wanted a test for concussion that could not be swayed by answers from a player who wants to stay in the game. Graduate student Nikhil Yadav designed a diagnostic tool that requires someone to simply speak into a mobile device such as a tablet.

Previous studies have found that head injuries change speech characteristics, with negative effects on vowel production in particular. The researchers initially tested the app with 125 boxers participating in a collegiate competition. Before any bouts started, the researchers recorded each boxer saying the numbers one through nine as a baseline. After boxing, the researchers recorded the athletes saying the same words again. By analyzing several acoustic features of the vowel sounds, including their pitch, the app was able to identify all nine boxers who were later diagnosed with a concussion.
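
The app’s actual signal processing isn’t described in detail here, so the following is only a minimal sketch of the idea: estimate the pitch of a recorded vowel at baseline and again after play, and flag a large shift. The autocorrelation estimator and the 15 percent threshold below are illustrative assumptions, not the Notre Dame app’s method.

```python
# Minimal sketch (not the Notre Dame app): compare the pitch of a vowel
# recorded at baseline with the same vowel recorded after play.
# The autocorrelation estimator and the 15% flag threshold are assumptions.
import numpy as np

def estimate_pitch(samples, sample_rate, fmin=75.0, fmax=400.0):
    """Estimate the fundamental frequency (Hz) of a voiced segment via autocorrelation."""
    x = samples - np.mean(samples)
    corr = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation, lags >= 0
    lag_min = int(sample_rate / fmax)                      # shortest plausible period
    lag_max = int(sample_rate / fmin)                      # longest plausible period
    best_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / best_lag

def pitch_shift_flag(baseline, postgame, sample_rate, threshold=0.15):
    """Flag a possible problem if pitch changed by more than `threshold` (relative)."""
    f0_before = estimate_pitch(baseline, sample_rate)
    f0_after = estimate_pitch(postgame, sample_rate)
    change = abs(f0_after - f0_before) / f0_before
    return change > threshold, f0_before, f0_after

if __name__ == "__main__":
    sr = 8000
    t = np.arange(2000) / sr                     # a quarter-second "vowel"
    before = np.sin(2 * np.pi * 120 * t)         # synthetic 120 Hz tone as baseline
    after = np.sin(2 * np.pi * 150 * t)          # synthetic 150 Hz tone after play
    flagged, f0_b, f0_a = pitch_shift_flag(before, after, sr)
    print(f"baseline {f0_b:.0f} Hz, after {f0_a:.0f} Hz, flagged: {flagged}")
```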

“The preliminary results were very promising,” Yadav says. The test wasn’t perfect, however; it also falsely identified concussions in three boxers. “That’s low in this early stage, but we don’t want to see false positives,” says Yadav. He hopes to fine-tune the test to minimize them.

Now, Yadav and colleagues are kicking off a large test of the system with youth and high school football players. They will work with around 1,000 kids between the ages of 10 and 18 in 20 different schools and clubs in Indiana, Illinois, Wisconsin, and Michigan. Any predicted concussions will be compared to medical diagnoses.

If the app proves its worth in this larger test, the researchers plan to turn it into a commercial product through a startup called Contect.

An App for Coasting, Rather than Surfing, the Web

Browser builder Opera smartly simplifies the Web on the iPad with touchable tiles.

By Rachel Metz

While computers have changed drastically over the past 20 years, morphing from big boxes to svelte laptops, touch-screen tablets, and smartphones, the Web browsers we use on them have looked largely the same.

Sure, you can take a desktop Web browser, optimize it for a smaller screen, and add some touch features—as the most commonly used mobile browsers do. But the results are often inelegant, because the things you do online on a laptop or desktop computer tend to be different from the things you do on a tablet or a smartphone. And chances are you’re not using a traditional keyboard and mouse, the tools that desktop browser makers could count on you to have.

Opera, the largest of the small-share browser makers, recognizes this with the recent release of its free Coast browser app for the iPad. Coast represents a major change in the look and feel of a tablet Web browser. It’s the latest in a long line of browsers that have tried new approaches (see “The Browser Wars Go Mobile”), but it may be the first that really makes the new approach work.

As the name implies, Coast is more for sitting back and watching where the Web takes you—truly browsing—than for going to predetermined destinations. The app banishes the standard URL address bar, treating your favorite websites as on-screen tiles and hiding most options. While it could use a bit more polishing, it’s a clever reimagining of how we can experience the Web on a small touch screen.

Coast first surfaced this year in a leaked video showing an early internal presentation of the browser, which Huib Kleinhout, an engineer at Oslo, Norway–based Opera, started as a side project a year and a half ago. In an interview, Kleinhout told me he wanted to simply build a browser “for the Internet of now,” in which websites are more complicated than ever and there are orders of magnitude more pages than there are apps in any app store.

Coast takes cues from apps and mobile operating software to do this: favorite websites appear as a bank of brightly colored square tiles against a dark background, up to nine of them on each virtual screen. That’s plenty if, like me, you visit only a handful of websites regularly.

Tapping a tile for, say, Reddit opens a screen-filling page with no borders or URL bar at the top. Swiping down on the page refreshes the content. Tapping a tiny cluster of nine white squares at the bottom center of the display takes you back to the home screen, while hitting a line of three little white squares on the bottom right allows you to scroll sideways through all tabs you have open.

Viewing the Web this way—without tabs and options constantly in your face—forces you to relax. I found I was more inclined to click around within websites, reading more content on different pages, rather than flitting from one site to the next.

If you want to search for a specific term or site, start typing it into the search bar on Coast’s home screen. The app will offer autocomplete suggestions via Google or let you search for your term on Google itself, and it will present a handful of clickable tiles for websites it thinks you may want to visit (tap the word “new,” for example, and you might get tiles for the online electronics retailer Newegg and a couple of news websites).

 

One sweet feature is the absence of back or forward buttons. Instead, you swipe to the right or left. Smart, right? I also liked how page-sharing features are hidden under an icon on the bottom right of the screen.

It’s a first public version, though, so understandably Coast could use some work. Its city-at-night background and script-like logo feel dated. Pages sometimes froze, and several times the videos I watched through the app were glitchy. Coast failed to detect my intention to open links on several pages.

Still, Coast represents a fresh, enjoyable way to think about browsing on a tablet, with minimal distractions so you can sit back and relax.

Smart Robots Can Now Work Right Next to Auto Workers

It used to be too dangerous to have a person work alongside a robot. But at a South Carolina BMW plant, next-generation robots are changing that.

By Will Knight

BMW has taken a huge step toward revolutionizing the role of robots in automotive manufacturing by having a handful of robots work side-by-side with human workers at its plant in Spartanburg, South Carolina.

As a new generation of safer, more user-friendly robots emerges, BMW’s man-machine collaboration could be the first of many examples of robots taking on new human tasks, and working more closely alongside humans. While many fear that this trend could put people out of work (see “How Technology Is Destroying Jobs”), proponents argue it will instead make employees more productive, relieving them of the most unpleasant and burdensome jobs.

Robots have been a part of automotive manufacturing for decades. The first industrial robot—a hulking 4,000-pound arm called the Unimate—attached die castings to car doors at a GM production line in 1961. Such manufacturing robots have been powerful and extremely precise, but it’s never been safe for humans to work alongside them. As a result, a significant number of final assembly tasks, in auto plants and elsewhere, are still performed almost entirely by hand.

At BMW’s South Carolina plant, robots made by the Danish company Universal Robots have broken through this barrier and are helping workers perform final door assembly. The robots are working with a door sealant that keeps sound and water out of the car, and is applied before the door casing is attached. “It’s pretty heavy work because you have to roll this glue line to the door,” says Stefan Bartscher, head of innovation at BMW. “If you do that several times a day, it’s like playing a Wimbledon match.”

According to Bartscher, final assembly robots will not replace human workers; they will extend their careers. “Our workers are getting older,” Bartscher says. “The retirement age in Germany just rose from 65 to 67, and I’m pretty sure when I retire it’ll be 72 or something. We actually need something to compensate and keep our workforce healthy, and keep them in labor for a long time. We want to get the robots to support the humans.”

In recent years, robot manufacturers have realized that with the right software and safety controls, their products could be made to work in close proximity to humans. As a result, a new breed of more capable workplace robot is rapidly appearing.

One of the most prominent examples is Baxter, made by Rethink Robotics, a Boston-based company founded by the robotics pioneer Rodney Brooks. Baxter has a torso, a head, and two arms; it is safe to work alongside, and it can be taught to perform new tasks simply by moving its arms through an operation (see “This Robot Could Transform Manufacturing”). So far, Baxter has largely been deployed in small U.S. factories, where it helps package items moving along a conveyor. BMW’s effort represents a more significant push into heavy-duty manufacturing.

 

BMW is testing even more sophisticated final assembly robots that are mobile and capable of collaborating directly with human colleagues. These robots, which should be introduced in the next few years, could conceivably hand their human colleague a wrench when he or she needs it. The company is developing the newer robots in collaboration with Julie Shah, a professor in MIT’s department of aeronautics and astronautics. “Oftentimes, the robot will need to maneuver closely around people,” says Shah. “It’ll need to possibly straddle the moving floor—the actual assembly line; it’ll need to track a person that is potentially standing on that assembly line and moving with it.”

Shah’s team has built robots capable of these tasks on a simulated production line at MIT. After the control software has been tested sufficiently at BMW’s lab, the robot will be deployed on one of its real assembly lines. “It’s a fantastic navigation and controls challenge, and it hasn’t been solved before,” Shah says.

Facebook Launches Advanced AI Effort to Find Meaning in Your Posts

A technique called deep learning could help Facebook understand its users and their data better.

By Tom Simonite

Facebook is set to get an even better understanding of the 700 million people who share details of their personal lives using the social network each day.

A new research group within the company is working on an emerging and powerful approach to artificial intelligence known as deep learning, which uses simulated networks of brain cells to process data. Applying this method to data shared on Facebook could allow for novel features, and perhaps boost the company’s ad targeting.

Deep learning has shown potential to enable software to do things such as work out the emotions or events described in text even if they aren’t explicitly referenced, recognize objects in photos, and make sophisticated predictions about people’s likely future behavior.

The eight-strong group, known internally as the AI team, only recently started work, and details of its experiments are still secret. But Facebook’s chief technology officer, Mike Schroepfer, will say that one obvious place to use deep learning is to improve the news feed, the personalized list of recent updates he calls Facebook’s “killer app.” The company already uses conventional machine learning techniques to prune the 1,500 updates that average Facebook users could possibly see down to 30 to 60 that are judged to be most likely to be important to them. Schroepfer says Facebook needs to get better at picking the best updates due to the growing volume of data its users generate and changes in how people use the social network.
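
Facebook hasn’t published how that pruning works; the sketch below is only a toy illustration of the general shape of the task the article describes, scoring each candidate update with a handful of invented features and weights and keeping the top few dozen.

```python
# Illustrative only: prune a large pool of candidate updates down to the
# few judged most relevant, in the spirit of the feed ranking described above.
# Feature names and weights are invented for the example, not Facebook's.
import random
from dataclasses import dataclass

@dataclass
class Update:
    author_affinity: float   # how often the viewer interacts with this author (0..1)
    recency: float           # 1.0 = just posted, decays toward 0
    media_richness: float    # photos/videos tend to score higher
    text: str

WEIGHTS = {"author_affinity": 0.5, "recency": 0.3, "media_richness": 0.2}

def score(u: Update) -> float:
    return (WEIGHTS["author_affinity"] * u.author_affinity
            + WEIGHTS["recency"] * u.recency
            + WEIGHTS["media_richness"] * u.media_richness)

def rank_feed(candidates: list[Update], k: int = 60) -> list[Update]:
    """Keep the k highest-scoring updates out of a much larger candidate pool."""
    return sorted(candidates, key=score, reverse=True)[:k]

# Example: 1,500 candidates in, 60 out.
pool = [Update(random.random(), random.random(), random.random(), f"update {i}")
        for i in range(1500)]
feed = rank_feed(pool, k=60)
print(len(feed), "updates shown, top score:", round(score(feed[0]), 3))
```

In a real system the weights would be learned from engagement data rather than set by hand.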

“The data set is increasing in size, people are getting more friends, and with the advent of mobile, people are online more frequently,” Schroepfer told MIT Technology Review. “It’s not that I look at my news feed once at the end of the day; I constantly pull out my phone while I’m waiting for my friend, or I’m at the coffee shop. We have five minutes to really delight you.”

Schroepfer says deep learning could also be used to help people organize their photos, or choose which is the best one to share on Facebook.

Facebook’s foray into deep learning sees it following its competitors Google and Microsoft, which have used the approach to impressive effect in the past year. Google has hired and acquired leading talent in the field (see “10 Breakthrough Technologies 2013: Deep Learning”), and last year created software that taught itself to recognize cats and other objects by reviewing stills from YouTube videos. The underlying deep learning technology was later used to slash the error rate of Google’s voice recognition services (see “Google’s Virtual Brain Goes to Work”).

Researchers at Microsoft have used deep learning to build a system that translates speech from English to Mandarin Chinese in real time (see “Microsoft Brings Star Trek’s Voice Translator to Life”). Chinese Web giant Baidu also recently established a Silicon Valley research lab to work on deep learning.

 

Less complex forms of machine learning have underpinned some of the most useful features developed by major technology companies in recent years, such as spam detection systems and facial recognition in images. The largest companies have now begun investing heavily in deep learning because it can deliver significant gains over those more established techniques, says Elliot Turner, founder and CEO of AlchemyAPI, which rents access to its own deep learning software for text and images.

“Research into understanding images, text, and language has been going on for decades, but the typical improvement a new technique might offer was a fraction of a percent,” he says. “In tasks like vision or speech, we’re seeing 30 percent-plus improvements with deep learning.” The newer technique also allows much faster progress in training a new piece of software, says Turner.

Conventional forms of machine learning are slower because before data can be fed into learning software, experts must manually choose which features of it the software should pay attention to, and they must label the data to signify, for example, that certain images contain cars.

Deep learning systems can learn with much less human intervention because they can figure out for themselves which features of the raw data are most useful to understanding it. They can even work on data that hasn’t been labeled, as Google’s cat recognizing software did. Systems able to do that typically use software that simulates networks of brain cells, known as neural nets, to process data, and require more powerful collections of computers to run.
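
As a toy contrast to the hand-engineered approach described above, here is a minimal two-layer neural network, written with plain NumPy, that learns the XOR function from raw 0/1 inputs with no manually chosen features. It is a textbook sketch, not any company’s system.

```python
# Toy illustration: a two-layer neural network learns XOR, a task a single
# linear model cannot solve, without any hand-engineered features.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)          # hidden layer: the learned "features"
    p = sigmoid(h @ W2 + b2)          # output probability
    # backward pass (gradients of the cross-entropy loss)
    dp = p - y
    dW2 = h.T @ dp; db2 = dp.sum(axis=0, keepdims=True)
    dh = (dp @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0, keepdims=True)
    # gradient descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())  # ~[0, 1, 1, 0]
```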

Facebook’s AI group will work on both applications that can help the company’s products and on more general research on the topic that will be made public, says Srinivas Narayanan, an engineering manager at Facebook helping to assemble the new group. He says one way Facebook can help advance deep learning is by drawing on its recent work creating new types of hardware and software to handle large data sets (see “Inside Facebook’s Not-So-Secret New Data Center”). “It’s both a software and a hardware problem together; the way you scale these networks requires very deep integration of the two,” he says.

How Twitter Can Cash In with New Technology

Twitter seeks to do better at inferring its users’ consumer and political preferences, gender, age, and more.

By David Talbot

Twitter began selling promoted tweets in 2010, but it has always faced challenges in knowing which of those ads should be delivered to which Twitter accounts. Most Twitter users don’t give up their locations, and many don’t reveal their identities in their profiles. And mining the tweets themselves for insights is hard because they are not only short but also filled with slang and abbreviations.

Now, as Twitter plans to sell shares to the public, its success will depend in part on how much better it can get at deciphering tweets. Solving that technological puzzle would help Twitter get better at selling the right promoted messages at the right times, and it could possibly lead to new revenue-producing services.

Twitter hasn’t done badly so far; the analyst firm eMarketer predicts ad revenue will double this year, to $583 million. But the company is still trying to get smarter about analyzing tweets. It has bought startups such as Bluefin Labs, which can tell which TV show—and even which precise airing of a TV advertisement—people have tweeted about (see “A Social-Media Decoder”). It has also invested in companies such as Trendly, a Web analytics provider that reveals how promoted tweets are being read and shared. And just last week, Twitter blogged that it is continually running experiments on how to do better at tasks such as suggesting relevant content.

For its next steps, Twitter might consider tapping the latest academic research. Here are some areas it could concentrate on.

Location

Fewer than 1 percent of tweets are “geotagged,” or voluntarily labeled by users with location coördinates. Much of the time, Twitter can use your computer’s IP address and get a good approximation. But that’s not the same as knowing where you are. In mobile computing, IP addresses are reassigned frequently—and some people take steps to obscure their true IP address.

But recent research has shown that the locations of friends—defined as people you follow on Twitter who are also following you—can be used to infer your location to within 10 kilometers half the time. It turns out that many Twitter friends live near one another, says David Jurgens, a computer scientist at Sapienza University of Rome, who did this research while at HRL Laboratories in Malibu, California. If some of your friends have made geotagged tweets or revealed their location in a Twitter profile, Jurgens says, that may be enough to show where you probably are.
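
Jurgens’ published method is more sophisticated than this, but a minimal sketch of the idea might look like the following: take the friends whose coordinates are known, guess the user’s location as their coordinate-wise median, and measure the error with the haversine formula. The example coordinates are invented.

```python
# Minimal sketch of the idea (not the published method): guess a user's
# location from the known locations of mutual-follow friends, then measure
# how far the guess is from the true location with the haversine formula.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def guess_location(friend_coords):
    """Coordinate-wise median of friends' known (lat, lon) pairs."""
    lats = [lat for lat, _ in friend_coords]
    lons = [lon for _, lon in friend_coords]
    return median(lats), median(lons)

# Example: three friends near San Francisco, one in New York.
friends = [(37.77, -122.42), (37.80, -122.27), (37.69, -122.47), (40.71, -74.01)]
guess = guess_location(friends)
true_loc = (37.78, -122.41)  # the user really is in San Francisco
print(guess, "error:", round(haversine_km(*guess, *true_loc), 1), "km")
```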

Demographics

Natural-language processing gets better all the time. Hundreds of markers—word choices, abbreviations, slang terms, and letter and punctuation combinations—signify ever-finer strata of demographic groups and their interests.

Some things, like political leanings, are often not hard to figure out from the right hashtags or from sentiments associated with terms like “Obamacare,” says Dan Weld, a computer scientist at the University of Washington.

Meanwhile, Derek Ruths, a computer scientist who explores natural-language processing at McGill University, has recently shown that linguistic cues can identify U.S. Twitter users’ political orientation with 70 to 90 percent accuracy and can even identify their age (within five years) with 80 percent accuracy. For example, words that most strongly suggest someone is between the ages of 25 and 30 include “for,” “on,” “photo,” “I’m,” and “just,” he says. Generally, these users have a somewhat stronger allegiance to grammar than younger, slang-loving users, he says. And as with location, the profiles of the people they follow provide clues to their demographics.
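
Ruths’ models and corpora aren’t reproduced here; as a rough sketch of the general approach, assuming scikit-learn is available, a bag-of-words logistic regression can be trained on labeled tweets, and its largest coefficients reveal which word markers most strongly separate the groups. The toy tweets and labels below are invented.

```python
# Minimal sketch (invented toy data, not Ruths' corpora): a bag-of-words
# logistic regression that learns which word markers separate two groups.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

tweets = [
    "just posted a photo for my trip, I'm on vacation",
    "on my way, photo soon, just landed",
    "lol omg dis game is lit fr fr",
    "smh ur so extra lmaooo",
]
labels = ["25_to_30", "25_to_30", "under_20", "under_20"]  # illustrative labels

vec = CountVectorizer()
X = vec.fit_transform(tweets)
clf = LogisticRegression().fit(X, labels)

# Words with the largest coefficients are the strongest markers for one class.
words = vec.get_feature_names_out()
weights = sorted(zip(clf.coef_[0], words), reverse=True)
print("strongest markers:", [w for _, w in weights[:5]])
print("prediction:", clf.predict(vec.transform(["I'm just here for the photo"]))[0])
```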

But even if Twitter can make pretty good guesses about 90 percent of its users, “even missing 10 percent means you miss a lot of people,” says Ruths. “If I were Twitter, I’d want to close that 10 percent gap. And you’d want to find out real details like who someone’s mother is. If it’s Mom’s birthday, you want to tell those people how to order flowers. Twitter can’t do that—yet.”

Making Sense of Breaking News

One of the major uses of Twitter is to report on breaking news events (see “Can Twitter Make Money?”). With so many people tweeting little nuggets of news and other current information, tools have even been built to tease out play-by-play sports action (see “Researchers Turn Twitter into Real-Time Sports Commentator”).

But in major emergencies—like a terrorist attack or earthquake—so many tweets are generated that making sense of them in real time is tricky. Twitter might highlight the most meaningful ones, to cement itself as a must-visit service, but how?

A group at the University of Colorado, Boulder, is using natural-language processing to highlight the most relevant tweets in a disaster. Recent research shows significant progress in differentiating tweets about personal reflections, emotional expressions, or prayers from ones containing hard information about where a fire is burning or whether medical supplies are needed.

In one project, the group was able to identify valuable, news-containing tweets with 80 percent accuracy; these tend to contain language that is formal, objective, and lacking in personal pronouns. Last year they extended that work to classify the important tweets by categories such as damage reports, requests for aid, and advice. “We are trying to figure out which tweets have the most useful information to the people on the ground,” says Martha Palmer, a professor of linguistics and computer science at Boulder.
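
The Boulder classifiers themselves aren’t reproduced here, so the following is only a rule-of-thumb sketch built on the two cues the article mentions: tweets with few personal pronouns and little subjective wording are scored as more likely to carry hard information. The word lists and weights are invented.

```python
# Rough sketch of the cues described above (not the Boulder classifier):
# tweets that avoid personal pronouns and subjective words are treated as
# more likely to carry hard, on-the-ground information.
PERSONAL = {"i", "me", "my", "we", "our", "you", "your"}
SUBJECTIVE = {"pray", "praying", "hope", "scary", "omg", "sad", "thoughts"}

def newsworthiness(tweet: str) -> float:
    """Score between 0 and 1; higher means more likely to contain hard information."""
    words = [w.strip(".,!?#@").lower() for w in tweet.split()]
    if not words:
        return 0.0
    personal = sum(w in PERSONAL for w in words) / len(words)
    subjective = sum(w in SUBJECTIVE for w in words) / len(words)
    return max(0.0, 1.0 - 2.0 * personal - 2.0 * subjective)

tweets = [
    "Fire reported on the 400 block of Main St, two engines on scene",
    "omg praying for everyone, I hope my friends are safe",
]
for t in tweets:
    print(round(newsworthiness(t), 2), t)
```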

The Evolution of Ad Tech

Going from Mad Men to Math Men. How technology has fundamentally changed the art and science of advertising.

Once upon a time the marriage of advertising to media was a simple party for two.  And even when the traditional media landscape expanded to online, marketers continued to work directly with publishers’ sales teams to buy advertising space. After all, the golden rule was “media as proxy for audience.”

But then the scale of the Internet exploded exponentially. One hundred billion ad impressions (each time an online ad is displayed counts as an impression) reach the market every single day, presenting 100 billion opportunities to place those ads. According to comScore, nearly 6 trillion display ad impressions were delivered in 2012.

Something else happened as a result of the Internet’s growth: voluminous amounts of data appeared, and so did the opportunity to use that data to find and target specific online consumers. Marketers were delighted: at last, the right ads could be delivered to the “right” people, anywhere they appeared online. To do this, marketers would analyze the data to find patterns of consumer behavior and pinpoint the products or services a user was most likely to respond to, in order to influence sales.

With all this new online advertising inventory inevitably came unsold ad space, so-called “remnant inventory.” Around 2001, ad networks emerged to help facilitate the purchase of that remnant space in bulk from publishers, and the sale of that space to marketers. But there were problems. The networks did not allow much room for transparency upfront, making it harder for marketers to determine who was really seeing their ads. When some of the weaknesses of the network model started to become exposed, the marketplace reacted by introducing ad exchanges and real-time bidding (RTB) in 2007. Ad exchanges and RTB allowed advertisers to bid for advertising space via an auction model and deliver the winning ad impressions in milliseconds—all behind the screens during the time it took for the online user to download a webpage. It also created new opportunities for targeting, as more data about the audience viewing the ad was shared with the marketer to create demand and thus determine a fair market price for the ad space.
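
Real exchanges layer on price floors, fees, and strict timeouts, but the core mechanic described above can be sketched as a simple auction: each bidder offers a price for one impression, the highest bid wins, and, in the common second-price variant assumed here, the winner pays the runner-up’s price.

```python
# Minimal sketch of a real-time-bidding auction for a single ad impression.
# Real exchanges add floors, fees, and timeouts; this shows only the mechanic.
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str
    cpm: float   # price offered per thousand impressions, in dollars

def run_second_price_auction(bids: list[Bid], floor_cpm: float = 0.10):
    """Highest bid at or above the floor wins and pays the second-highest price."""
    eligible = sorted((b for b in bids if b.cpm >= floor_cpm),
                      key=lambda b: b.cpm, reverse=True)
    if not eligible:
        return None  # impression goes unsold
    winner = eligible[0]
    clearing_cpm = eligible[1].cpm if len(eligible) > 1 else floor_cpm
    return winner.bidder, clearing_cpm

bids = [Bid("brand_a", 2.40), Bid("brand_b", 1.95), Bid("brand_c", 0.05)]
print(run_second_price_auction(bids))   # ('brand_a', 1.95)
```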

The big promise of real-time bidding in online advertising is increased efficiency, increased effectiveness, and ultimately, increased profits for the advertiser and a tidy sum for the publisher as well. And it’s that promise that has drawn so much cash and attention into the ad tech space.

“It’s about getting marketers closer to their customers. The ability to give them more information about their audience so they can make more informed decisions, both with regard to when and where they deliver their message, and at what price,” said Edward Montes, CEO of Digilant.

Kirk McDonald, president of PubMatic, added, “It’s not art or science or going from ‘Mad Men’ to ‘Math Men’. It’s about balancing art and science for balanced decisions.”

Both PubMatic and Digilant are players in the complex new system of ad tech companies facilitating RTB. With the scale of online advertising and the volume of data growing so dramatically, competing has become a technically intensive game. Companies need better, faster machine learning, smarter people, and a solid backing of cash to get up and running. Even so, beckoned by the potential opportunity, new companies keep entering an increasingly crowded field, eager to claim a share of an annual pie worth roughly $2 billion and growing.

“The difficulty is everyone jumping on the bandwagon at the same time. There is a beguiling set of companies you have to be familiar with,” said Jon Slade, Commercial Director for Digital Advertising at The Financial Times.

The competition “is almost like an arms race,” said Scott Neville, Chief Marketing Officer at IPONWEB.

In a new industry with little in the way of standards and great variation among companies supposedly in the same category, a clear picture of the space can be elusive. In particular, what can get lost is who does what, who gets paid for what, and which players currently operating in the market are the most effective and honest.

Tom Hespos, founder of Underscore Marketing and among the more critical voices of the industry, said, “For many, digital advertising has become a black box where they dump money and hope for the best.”

And with so many hands trying to get a cut of the industry, there are growing calls for more consolidation and more transparency.

Historically, trends tend to swing from one extreme to the other before settling back in the middle. At the moment the industry is still swinging toward the machines. But there is a drag on the pendulum pulling it back toward humans who interpret the data, because ultimately ad placement is still not about machines selling to machines but about humans selling products to humans.

Twitter Plans to Go Public

Twitter is the next giant social network with plans to cash in.

Twitter today said it had officially submitted paperwork for a planned public offering of stock. The company disclosed the filing in a tweet posted at 6 p.m.

A Twitter IPO could be the most anticipated technology stock offering since Facebook went public in May 2012, and things could get just as complicated.

Facebook’s stock sagged, then clawed back up, as the company grappled with whether it could successfully advertise on mobile devices (see “How Facebook Slew the Mobile Monster”). Facebook is worth $108 billion today.

Earlier this year, Twitter was valued by some investors at $9.8 billion. But it could be worth much more than that now.

In the lead-up to its IPO plans, Twitter has become more aggressive about advertising on the site. For instance, in July, Twitter announced a new product called TV ad targeting, which lets advertisers aim messages at users who are mentioning certain TV programs or ads (see “Now Television Advertisers Know You’re Tweeting” and “A Social-Media Decoder”).

Twitter has played an increasingly important role as a source of news and information, including in countries roiled by protests and uprisings, where the service is used by organizers (see “Streetbook”). It is blocked in China.

An IPO will increase pressure on Twitter to raise revenues from advertising—and use technologies to track what people are doing, saying, and watching. That could bring it into conflict with some users, including those who switched to the site because it seemed less commercial.

Earlier this year, Twitter’s 2013 advertising revenues were estimated at $582 million, with half coming from people accessing the site on mobile devices. Alexa ranks Twitter 10th among the most popular websites.