Software Mines Science Papers to Make New Discoveries

Software digests thousands of research papers to accurately identify proteins that could prove valuable cancer drug targets.

 

Software that read tens of thousands of research papers and then predicted new discoveries about the workings of a protein that’s key to cancer could herald a faster approach to developing new drugs.
The software, developed in collaboration between IBM and Baylor College of Medicine, was set loose on more than 60,000 research papers that focused on p53, a protein involved in cell growth implicated in most cancers. By parsing sentences in the documents, the software could build an understanding of what is known about enzymes called kinases that act on p53 and regulate its behavior; these enzymes are common targets for cancer treatments. It then generated a list of other proteins mentioned in the literature that were probably undiscovered kinases, based on what it knew about those already identified. Most of its predictions tested so far have turned out to be correct.
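
Neither team has published the underlying code, but the core idea (rank candidate proteins by how closely the language around them resembles the language around known p53 kinases) can be sketched in a few lines. The Python below is purely illustrative: the protein names, sentences, and bag-of-words scoring are invented stand-ins, not the IBM and Baylor pipeline.

    # Illustrative sketch only: rank candidate proteins by the similarity of their
    # literature context to that of known p53 kinases. All names and sentences are
    # invented; this is not the IBM/Baylor system.
    import math
    from collections import Counter

    def profile(sentences):
        """Bag-of-words profile built from sentences that mention one protein."""
        words = Counter()
        for sentence in sentences:
            words.update(sentence.lower().split())
        return words

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a if w in b)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    # Toy corpus: sentences mentioning known kinases and two candidate proteins.
    known_kinases = {
        "CHK2": ["CHK2 phosphorylates p53 at serine 20",
                 "CHK2 kinase activity regulates p53"],
        "ATM": ["ATM phosphorylates p53 after DNA damage"],
    }
    candidates = {
        "PROT_A": ["PROT_A phosphorylates p53 in response to stress"],
        "PROT_B": ["PROT_B binds actin filaments in the cytoskeleton"],
    }

    # Build a reference profile from the known kinases, then rank candidates.
    reference = Counter()
    for sentences in known_kinases.values():
        reference.update(profile(sentences))

    ranked = sorted(candidates,
                    key=lambda p: cosine(profile(candidates[p]), reference),
                    reverse=True)
    print(ranked)  # PROT_A ranks above PROT_B in this toy example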

“We have tested 10,” Olivier Lichtarge of Baylor said Tuesday. “Seven seem to be true kinases.” He presented preliminary results of his collaboration with IBM at a meeting on the topic of Cognitive Computing held at IBM’s Almaden research lab.

Lichtarge also described an earlier test of the software in which it was given access to research literature published prior to 2003 to see if it could predict p53 kinases that have been discovered since. The software found seven of the nine kinases discovered after 2003.

“P53 biology is central to all kinds of disease,” says Lichtarge, and so it seemed to be the perfect way to show that software-generated discoveries might speed up research that leads to new treatments. He believes the results so far show that to be true, although the kinase-hunting experiments are yet to be reviewed and published in a scientific journal, and more lab tests are still planned to confirm the findings so far. “Kinases are typically discovered at a rate of one per year,” says Lichtarge. “The rate of discovery can be vastly accelerated.”

Lichtarge said that although the software was configured to look only for kinases, it also seems capable of identifying previously unidentified phosphatases, which are enzymes that reverse the action of kinases. It can also identify other types of protein that may interact with p53.

The Baylor collaboration is intended to test a way of extending a set of tools that IBM researchers already offer to pharmaceutical companies. Under the banner of accelerated discovery, text-analyzing tools are used to mine publications, patents, and molecular databases. For example, a company in search of a new malaria drug might use IBM’s tools to find molecules with characteristics that are similar to existing treatments. Because software can search more widely, it might turn up molecules in overlooked publications or patents that no human would otherwise find.

“We started working with Baylor to adapt those capabilities, and extend it to show this process can be leveraged to discover new things about p53 biology,” says Ying Chen, a researcher at IBM Research Almaden.

It typically takes between $500 million and $1 billion to develop a new drug, and 90 percent of candidates that begin the journey don’t make it to market, says Chen. The cost of failed drugs is cited as one reason that some drugs command such high prices (see “A Tale of Two Drugs”).

Lawrence Hunter, director of the Center for Computational Pharmacology at the University of Colorado Denver, says that careful empirical confirmation is needed for claims that the software has made new discoveries. But he says that progress in this area is important, and that such tools are desperately needed.

The volume of research literature both old and new is now so large that even specialists can’t hope to read everything that might help them, says Hunter. Last year over one million new articles were added to the U.S. National Library of Medicine’s Medline database of biomedical research papers, which now contains 23 million items. Software can crunch through massive amounts of information and find vital clues in unexpected places. “Crucial bits of information are sometimes isolated facts that are only a minor point in an article but would be really important if you can find it,” he says.

Lichtarge believes that software like his could change the way scientists conduct research and assess new findings. Scientists currently rely in part on the reputation of the people, institutions, and journals involved, and the number of times a paper is cited by others.

Software that gleans meaning from all the information published within a field could offer a better way, says Lichtarge. “You might publish directly into the [software] and see how disruptive it is,” he says.

Hunter thinks that scientists might even use such tools at an earlier stage, having software come up with evidence for and against new hypotheses. “I think it would really help science go faster. We often waste a lot of time in the lab because we didn’t know every little thing in the literature,” he says.

 

 

Apple acquires Israeli 3D chip developer PrimeSense

(Reuters) – Apple Inc has bought Israel-based PrimeSense Ltd, a developer of chips that enable three-dimensional machine vision, the companies said on Monday, a move that signals gesture-controlled technologies could be coming to new devices from the maker of the iPhone and iPad.

An Apple spokesman confirmed the purchase but declined to say how much it spent or what the technology will be used for. Israeli media said Apple paid about $350 million for PrimeSense, whose technology powers the gesture control in Microsoft Corp’s Xbox Kinect gaming system.

“Apple buys smaller technology companies from time to time, and we generally do not discuss our purpose or plans,” an Apple spokesman said in an e-mail.

A spokeswoman for PrimeSense said: “We can confirm the deal with Apple. Further than that, we cannot comment at this stage.”

It was the second acquisition of an Israeli company by Apple in less than two years. Apple bought flash storage chip maker Anobit in January 2012.

PrimeSense’s sensing technology, which gives digital devices the ability to observe a scene in three dimensions, was used to help power Microsoft’s Xbox Kinect device.

The Israeli company has licensed the technology to Microsoft but it is unclear how that deal changes with Apple’s acquisition of PrimeSense, which provides the technology behind Kinect’s visual gesture system.

Apple and Microsoft have other licensing deals between them. Microsoft did not return a call seeking comment.

Analysts expect PrimeSense’s technology to show up in Apple devices in about 12 to 18 months, potentially in the long-rumored living-room device, such as a television, that fans have dubbed the iTV.

“While we have not had any more evidence of an iTV coming in the next 6 to 12 months, some sort of living room appliance is in Apple’s future and gesture technology could be critical,” Peter Misek, an analyst with Jefferies, said in a note to clients.

Apple’s interest in PrimeSense was first reported in July by Israeli financial newspaper Calcalist.

With Nokia, Microsoft to Invest $2B More in Wireless Chips

By Jennifer Booton

FOXBusiness

Microsoft’s (MSFT) $7.2 billion purchase of Nokia’s (NOK) device business will make it one of the world’s biggest buyers of silicon products, helping it to expand its line of tablets and smartphones, according to IHS (IHS).

The Redmond, Wash.–based Windows software maker, which announced plans in September to buy the heart of Nokia’s smartphone business, will buy an estimated $5.9 billion worth of semiconductors in 2014.

That’s up from $3.78 billion this year and $3.55 billion in 2012, making Microsoft the world’s eighth biggest purchaser of chips and enabling it to improve its gadgets and better compete with larger rivals like Samsung and Apple (AAPL).

To give some perspective, Microsoft ranked just 13th and 15th among chip buyers in 2012 and 2013, far behind the industry leaders.

Of that $5.9 billion, IHS says Microsoft will spend roughly 37 percent, or about $2.2 billion, on chips for wireless devices like smartphones and tablets, a sharp rise from the mere $110 million it spent on them last year.

“One challenge for Microsoft will be formulating a strategy for success and deeper penetration of its smartphone and tablet lines,” said Myson Robles-Bruce, senior analyst of semiconductor spend and design for IHS.

Microsoft has said that it doesn’t expect the Nokia buy to start driving significant improvements to profits until 2016. The deal faces a Nokia shareholder vote next Tuesday and remains subject to antitrust approvals.

Yahoo increases share buyback authorization by $5 billion

BY ALEXEI ORESKOVIC

SAN FRANCISCO Tue Nov 19, 2013 5:50pm EST

(Reuters) – Yahoo Inc said it has increased its share repurchase authorization by $5 billion and that it planned to offer $1 billion in convertible notes.

Shares of Yahoo increased 1.6 percent to $35.17 in after hours trading on Tuesday following the announcement.

Yahoo has aggressively repurchased its common stock in recent quarters using cash obtained from selling a portion of its stake in Chinese e-commerce giant Alibaba Group. In the first nine months of 2013, Yahoo spent $3.1 billion on share buybacks.

The buybacks have helped boost Yahoo’s shares roughly 74 percent this year, even as the Web portal’s revenue growth has remained stagnant amid competition from Facebook Inc, Google Inc and Twitter Inc.

Yahoo said the convertible notes will be due in 2018, with interest payable semi-annually in arrears on June 1 and December 1 of each year, beginning on June 1, 2014.

The interest rate and other terms of the senior unsecured notes will be determined at the time of pricing, Yahoo said. The company also intends to grant the initial purchasers of the notes the right to buy an additional $150 million in notes.

Microsoft’s Gates highlights tough requirements for new CEO

BY BILL RIGBY

BELLEVUE, Washington Tue Nov 19, 2013 6:28pm EST

(Reuters) – Chairman Bill Gates said on Tuesday he was pleased with Microsoft Corp’s progress in finding a new chief executive but outlined the difficulties in picking the next leader of the world’s largest software company as it seeks to reinvent itself as a mobile computing power.

Gates is part of the four-man committee that gave itself a year to find a successor to Chief Executive Officer Steve Ballmer after he announced his plan to retire in August. Sources close to the process have said the search is down to a handful of candidates, but the company itself has been largely silent.

“We’ve been doing a lot of meetings with both internal and external candidates and we’re pleased with the progress,” said Gates at Microsoft’s annual shareholder meeting in Bellevue, Washington. “We’re looking at a number of candidates and I’m not going to give a timeline today.”

Ballmer said in August he planned to retire within 12 months, and the CEO search committee – headed by lead independent director and former IBM executive John Thompson – tasked itself with finding a replacement by the end of that period. Sources close to the company expect an appointment no later than January.

Gates, who in previous years did not address the shareholders’ meeting with prepared remarks, went on to describe the challenges of finding the right person to lead Microsoft.

“It’s a complex role to fill – a lot of different skills, experience and capabilities that we need,” he said. “It’s a complex global business the new CEO will have to lead. The person has to have a lot of comfort in leading a highly technical organization and have an ability to work with our top technical talent to seize the opportunities.”

Gates paused briefly and choked up with emotion after he thanked Ballmer for his work at the company, saying both he and Ballmer have a commitment “to make sure the next CEO is the right person, for the right time, for the company we both love.” Gates and Ballmer are the only two CEOs in Microsoft’s 38-year history.

Gates, who co-founded Microsoft with Paul Allen in 1975, then left the stage and sat in the front row of an audience of around 400 people, alongside other members of the board. That was a departure from previous years when he remained onstage and occasionally answered questions.

Microsoft has not shed much light on its CEO search, but sources close to the process have told Reuters the company has narrowed its shortlist of candidates to just a handful, including Ford Motor Co chief Alan Mulally and former Nokia CEO Stephen Elop, as well as former Skype CEO and internal candidate Tony Bates, now responsible for Microsoft’s business development.

Microsoft remains highly profitable and last month beat Wall Street’s quarterly profit and revenue forecasts.

But the company has come under criticism for missing some of the largest technology shifts in the past few years from Internet search to social networking, and Apple Inc and Google Inc are now at the vanguard of a mobile computing revolution that is eroding its core PC-based business.

Microsoft’s shares closed down 0.5 percent at $36.74 on Nasdaq.

Liquid Metal Printer Lays Electronic Circuits on Paper, Plastic and Even Cotton

A simple way to print circuits on a wide range of flexible substrates using an inkjet printer has eluded materials scientists. Until now.

One of the dreams of makers the world over is to be able to print electronic circuits on more or less any surface using a desktop printer. The great promise is the possibility of having RFID circuits printed on plastic or paper packaging, LED arrays on wallpaper and even transparent circuits on glass. Or simply to rapidly prototype circuits when designing new products.

There is no shortage of conducting inks that aim to do this, but they all have drawbacks of various kinds. For example, many inks have low or difficult-to-control conductivity, or need to be heated to temperatures of up to 400 degrees C after they have been printed, thereby limiting the materials on which they can be printed. The result is that the ability to print circuits routinely on flexible materials such as paper or plastic has remained largely a dream.

Until now. Today, Jing Liu and pals at the Technical Institute of Physics and Chemistry in Beijing say they’ve worked out how to print electronic circuits on a wide range of materials using an inkjet printer filled with liquid metal.  And they’ve demonstrated the technique on paper, plastic, glass, rubber, cotton cloth and even an ordinary leaf.

The new technique is straightforward. The magic sauce is a liquid metal: an alloy of gallium and indium that is liquid at room temperature. They simply pump it through an inkjet printer to create a fine spray of liquid metal droplets that settle onto the substrate.

The droplets rapidly oxidise as they travel through the air, and this oxide forms a surface layer on each drop that prevents further oxidisation. That’s handy because the liquid metal itself does not easily adhere to the substrates. But the metal oxides do, and this is the reason, say Jing and co, that the circuits adhere so well to a wide range of surfaces.

They also say it’s relatively easy to create almost any circuit pattern, either by moving the printer head over the substrate or by using a mask. And they’ve demonstrated this by printing conducting circuits on cotton cloth, plastic, glass and paper as well as on a leaf.

That looks to be a useful development. The ability to print circuits in magazines or on t-shirts will surely attract much interest. And being able to test circuit designs by printing them out using a desktop printer will be invaluable to many makers.

Perhaps most exciting of all is that the technology behind all this is cheap and simple: there’s no reason why it couldn’t be pushed to market very rapidly. And that raises the prospect of being able to print prototype circuits in small businesses and even at home.

Could it be that liquid metal printers could bring about the same kind of revolution in home-built electronics that 3D printers triggered with material design? And might it be possible to combine them into a single machine that prints functional electronic devices?

 

Someday Your EV Charger May Be the Roadway Itself

A researcher envisions the ultimate cure for “range anxiety”: roadway-powered vehicles with modified on-board power receivers.

By Martin LaMonica on November 19, 2013

One way to extend the range of electric vehicles may be to provide power wirelessly through coils placed under the surface of a road. But charging moving vehicles with high-power wireless chargers below them is complex.

Researchers at North Carolina State University have developed a method to deliver power to moving vehicles using simple electronic components, rather than the expensive power electronics or complex sensors previously employed. The system uses a specialized receiver that induces a burst of power only when a vehicle passes over a wireless transmitter. Initial models indicate that placing charging coils in 10 percent of a roadway would extend the driving range of an EV from about 60 miles to 300 miles, says Srdjan Lukic, an assistant professor of electrical engineering at NCSU.

Wireless charging through magnetic induction—the same type typically used for electric toothbrushes—is being pursued by a number of companies for consumer electronics and electric vehicles (see “Wireless Charging—Has the Time Finally Arrived?”). Such chargers work by sending current through a coil, which produces a magnetic field. When a car with its own coil is placed above the transmitter, the magnetic field induces a flow of power that charges the batteries.
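
The underlying physics is Faraday’s law of induction: the faster the transmitter’s magnetic field changes, the larger the voltage induced in the receiver coil. A back-of-envelope sketch in Python, using made-up coil values rather than anyone’s actual design parameters, looks like this:

    # Back-of-envelope induction estimate with illustrative numbers only
    # (these are not NCSU's or any vendor's design parameters).
    # For a sinusoidal transmitter current I(t) = I0 * sin(2*pi*f*t), the peak
    # voltage induced in the receiver coil is M * 2*pi*f * I0, where M is the
    # mutual inductance between the two coils.
    import math

    M = 10e-6    # mutual inductance between coils, in henries (assumed)
    f = 20e3     # transmitter frequency, in hertz (assumed)
    I0 = 50.0    # peak transmitter current, in amperes (assumed)

    peak_voltage = M * 2 * math.pi * f * I0
    print(f"Peak induced voltage: {peak_voltage:.1f} V")  # about 63 V here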

Stationary inductive chargers for electric vehicles typically use sensors to ensure that the receiver coils on the vehicle are aligned above wireless charging pads correctly. The NCSU researchers’ system operates without position sensors in an attempt to simplify the design and make it more efficient. When there are no vehicles, the transmitter coil gives off a weak field. But when a vehicle with a receiver passes by, electronics in the receiver trigger a strong magnetic field and an accompanying flow of power, says Lukic.

Precisely controlling when the roadway coils produce a magnetic field is important for safety reasons; if the field misses the car’s receiving coils, it could attach to parts of the car or attract stray objects. “Somehow we have to channel or contain the magnetic field produced by the transmitter to always be right below the receiver. We cannot just beam out a strong field into the environment,” he says. Some designs have a series of coils that are always energized, but that approach is not energy-efficient, Lukic says.

 

In a stationary induction charger, the power receiver is made with a simple coil. The NCSU device is more sophisticated. It uses capacitors and inductors to manipulate the power transfer and magnetic field, says Lukic. The coupling between transmitter and receiver could be done with power electronics, but such a system would be more expensive than the NCSU device, he says.

The researchers have made a low-power prototype and intend to reach a rate of 50 kilowatts, on par with direct-current fast chargers, which work more efficiently than conventional alternating-current chargers.

Commercial interest in wireless charging systems for moving vehicles is growing. Qualcomm is working on a “dynamic” charging system that builds off its current stationary wireless EV charger. The University of Utah has tested a wireless charging infrastructure for city buses and has spun out a company called Wireless Advanced Vehicle Electrification to build commercial products. With the Utah system, a bus could charge from coils placed under the road surface where passengers load or at traffic lights. Dynamic wireless power transfer could also be used for robots.

The techniques that the NCSU researchers used for dynamic EV charging have already been applied in some consumer electronics, says Katie Hall, the chief technology officer of WiTricity, a company that makes wireless charging equipment. But the electronic tooling used for small electronics, such as switches, isn’t readily available for high-power applications. “That kind of technology doesn’t seamlessly scale to kilowatts or hundreds of kilowatts,” she says.

The Oak Ridge National Laboratory is also working on ways to automatically match the wireless power transmitter and receiver, says Omer Onar, a researcher who works on wireless vehicle charging there. The new work addresses only one of the barriers to dynamic charging, he says: “Most of the [commercial] barriers are associated with cost and infrastructure.”

Folding Wings Will Make Boeing’s Next Airplane More Efficient

A more efficient engine and composite wings that fold up will reduce fuel consumption on Boeing’s 777x.

Boeing says that in 2020 it will start deliveries of a new airplane, called the 777x for now, that will be 12 percent more fuel efficient than its competition. That would bring huge savings to airlines in reduced fuel costs.

The plane is based, as the name suggests, on Boeing’s large 777 airliner. To get the fuel savings, Boeing is using the new GE9X engine from GE Aviation. It will also have composite wings that are longer than the ones on the current 777. Longer wings are known to improve efficiency, but pose a problem for negotiating airports. One solution is to add vertical winglets, which have much the same effect. With the 777x, Boeing has opted for longer wings that fold up when the plane is on the ground, shortening the wingspan by just over 6 meters.

Boeing has received 259 orders for the airplane.

Airplane design will change slowly because of the need for high reliability. But aircraft designers are working on new technologies that could eventually cut fuel consumption in half (see “A More Efficient Jet Engine Is Made from Lighter Parts, Some 3-D Printed” and “’Hybrid Wing’ Uses Half the Fuel of a Standard Airplane”). Even greater benefits could come from radical engine designs and the use of batteries to augment them (see “Exploding Engine Could Reduce Fuel Consumption” and “Once a Joke, Battery-Powered Airplanes Are Nearing Reality”).

Internet Engineers Plan a Fully Encrypted Internet

Responding to reports of mass surveillance, engineers say they’ll make encryption standard in all Web traffic.

By David Talbot on November 18, 2013

In response to the public outcry over mass Internet surveillance by the National Security Agency (NSA), the engineers who develop the protocols that underpin the Internet are deep into an effort to encrypt all Web traffic, and expect to have a revamped system ready to roll out by the end of next year.

The effort, by the Internet Engineering Task Force, or IETF, an informal organization of engineers that changes Internet code and operates by rough consensus, involves HTTP, or hypertext transfer protocol, which governs information exchanges between the Web browser on your phone or computer and the servers that hold the data of the website you are visiting.

 

Leaked documents brought to light by former NSA contractor Edward Snowden suggest the NSA routinely harvests and stores huge amounts of information from major cloud computing platforms and wireless carriers. Today, much of the Web traffic between your device and a Web server is not encrypted unless a website chooses to use a variant of the HTTP protocol called HTTPS, which includes an encryption step called transport layer security. This is commonly used by banks, e-commerce sites, and some big sites, including Google and Facebook. (If a website’s address starts with “https://” it already uses encryption.)
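
That encryption step is visible from the client side: an HTTPS connection negotiates TLS before any HTTP data is exchanged. A minimal Python check of which TLS version a server negotiates (the host name is just an example) might look like this:

    # Minimal sketch: open a TLS (HTTPS) connection and report the negotiated
    # protocol version. The host name is an arbitrary example.
    import socket
    import ssl

    def tls_version(host, port=443):
        context = ssl.create_default_context()   # also verifies the server certificate
        with socket.create_connection((host, port)) as raw:
            with context.wrap_socket(raw, server_hostname=host) as tls:
                return tls.version()              # e.g. "TLSv1.2"

    print(tls_version("www.google.com"))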

The IETF change would introduce encryption by default for all Internet traffic. And the work to make this happen in the next generation of HTTP, called HTTP 2.0, is proceeding “very frantically,” says Stephen Farrell, a computer scientist at Trinity College in Dublin who is part of the project.

The hope is that a specification will be ready by the end of 2014. It would then be up to websites to actually adopt the technology, which is not mandatory.

Many experts have pointed out that mass Internet spying is done in part because it’s so easy to do. Some argue that making life a little harder for agencies like the NSA may make them focus on legitimate national security targets rather than scooping up everything and asking questions later (see “Bruce Schneier: NSA Spying Is Making Us Less Safe” and “NSA Leak Leaves Crypto Math Intact but Highlights Known Workarounds”).

“I think we can make a difference in the near term to have Web and e-mail encryption be ubiquitous,” Farrell says.

Indeed, an even nearer-term step the IETF is taking, he says, involves beefing up security in e-mail and instant message traffic—two key targets for dragnet surveillance. Right now, protocols exist to encrypt these communications as they make several hops: first from your device to your e-mail provider, then to the recipient’s e-mail provider, and finally to the recipient’s phone or computer.

The problem is that the protocols needed for encryption are often not set up correctly and then don’t work between different e-mail servers, such as those of small organizations, or when messages hop between a big encrypted service like Gmail and the server of a small company or institution.

When this happens, your e-mail winds up being sent “in the clear” because e-mail protocols elevate actual delivery over all other concerns, including whether or not the encryption actually was working. “I think we can do better on that,” Farrell says, to make the setup easier and verifiable.
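
That delivery-first behavior is easy to see in the standard SMTP flow: the sending server upgrades the connection with STARTTLS when the receiving server offers it, and otherwise sends the message in the clear anyway. A simplified Python sketch of that opportunistic logic, with placeholder server and addresses, looks like this:

    # Simplified sketch of opportunistic e-mail encryption: use STARTTLS when the
    # receiving server offers it, otherwise deliver unencrypted anyway.
    # The host name and addresses are placeholders, not real servers.
    import smtplib

    def send_opportunistically(host, sender, recipient, message):
        with smtplib.SMTP(host) as server:
            server.ehlo()
            if server.has_extn("starttls"):
                server.starttls()   # encrypt the connection when it is offered...
                server.ehlo()
            # ...but send either way: delivery is prioritized over encryption.
            server.sendmail(sender, recipient, message)

    send_opportunistically("mail.example.com", "alice@example.com",
                           "bob@example.org", "Subject: hello\r\n\r\nhi there")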

In some ways this is an about-face, because a year and a half ago a group within the IETF had decided against adding encryption by default in HTTP. Part of what makes the task hard, Farrell says, involves the static portion of Web pages that are “cached,” or stored on local servers nearer to the user.

Caching is problematic because the cached content sits between the browser and the server, and it is typically kept “in the clear”—or unencrypted—so it can be identified. By its nature, encryption makes every piece of content appear unique. “The issue is, if you turn on the crypto, you make it harder to do that caching,” Farrell says. “And the technical challenge is, how do we get the security benefit and keep the caching benefit? That’s being worked on.”

A range of other potential technical avenues for tightening up Internet privacy was outlined in a recent blog post by Tim Bray, who helped develop several key Web protocols and now works at Google. He attended an IETF meeting last week in Vancouver (see “Time for Internet Engineers to Fight Back Against the Surveillance Internet”).

Bray did not reply to an interview request but outlined the relevance of these efforts in his post. “At the end of the day this is a policy problem [and] not a technology problem; but to the extent that anything can be done at the technology level, a lot of the people who can do it are here,” he wrote, referring to the engineers and browser makers attending the IETF.

Indeed, Jari Arkko, the IETF chair and an expert on Internet architecture with Ericsson Research, says that nobody should harbor illusions about technical quick fixes. “I need to be honest and open—technology is only part of the issue here,” he says.

Genomics Technology Races to Save Newborns

A Kansas City hospital is pioneering genomic testing to solve life-threatening mysteries involving infants and kids with developmental disorders.

By Susan Young on November 19, 2013

Earlier this month, doctors at Children’s Mercy Hospital in Kansas City were able to use rapid DNA sequencing and analysis to identify the genetic mutation keeping a baby girl from eating and growing.

The hospital team identified the cause of her problems—a genetic disorder that can be treated with intensive nutritional support and vitamins to stimulate her mitochondria, the powerhouses of cells—and ruled out other progressive and often fatal conditions. In other words, the genomic diagnosis helped shape her clinical care, pointing the way to the nutritional supplements the girl needed to improve and the best way to feed her.

 

The baby girl is one of two dozen critically sick infants whose genomes have been scrutinized using one of the fastest whole-genome analyses in the world. The hope is that such rapid genome analysis will help doctors better diagnose and then treat infants born with genetic disorders. Over the next five years, the Kansas City team of doctors and geneticists will analyze the genomes of hundreds more babies born with serious disorders to evaluate the benefits of two-day genomic diagnoses to patients and their families.

The Children’s Mercy team is one of a few groups across the U.S. pioneering the use of genome sequencing in the care of children with puzzling conditions. Earlier this year, the hospital’s genomics center reported that it had developed a system to sequence and interpret a newborn’s genome in just 48 hours (see “Sick Babies Could Have Genomes Sequenced in Days”). The hospital has focused its rapid genome analyses on neonatal intensive care patients because a diagnosis could change the care of these infants at a critical time. “We can make more educated decisions,” says Sarah Soden, the medical director of the genome center. This could decrease the time a sick newborn has to spend in the stressful and expensive neonatal ICU.

The rapid diagnosis could also have lifelong benefits for newborns. In the case of the newborn girl who wasn’t eating, her muscles were so weak that she had trouble swallowing; she had to be fed through a tube. But once her condition was diagnosed, her doctors realized they could feed her a thickened formula, which will allow her to learn how to eat in a critical developmental window. “Kids who aren’t allowed to eat in the first months of life are really hard to later teach to eat,” says Soden.

Gene tests and whole-genome analyses often take weeks, but the Kansas City hospital has developed computational tools to more quickly identify the potentially medically relevant variations in a patient’s three billion base pairs. Whole-genome analyses, as opposed to targeted gene tests, can be especially beneficial for newborns because they may not yet show all the symptoms of a given condition. “The ability to cast a wide net and look at all relevant genes is very helpful for newborns who may not have fully presented with all of a disease’s classic features,” says Soden.

The analysis starts with a speedy 25-hour DNA sequencing process. The data is then analyzed by software developed by Children’s Mercy. The software first looks at genes known to be connected to symptoms exhibited by the infant. If none are found, the analysis is then expanded to all DNA variants known to potentially cause disease.
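
Children’s Mercy has not released its software, but the two-stage logic described above, first checking genes already linked to the infant’s symptoms and then widening the search to all known disease-causing variants, can be sketched in a few lines of Python. The gene names, variants, and symptom mappings below are invented placeholders:

    # Hypothetical sketch of the two-stage variant prioritization described above.
    # All gene, variant, and symptom data are invented placeholders.

    def prioritize(patient_variants, symptoms, symptom_genes, disease_variants):
        """Return candidate variants, trying the narrowest filter first."""
        # Stage 1: variants in genes already linked to the infant's symptoms.
        relevant_genes = set()
        for symptom in symptoms:
            relevant_genes.update(symptom_genes.get(symptom, []))
        stage1 = [v for v in patient_variants if v["gene"] in relevant_genes]
        if stage1:
            return stage1
        # Stage 2: fall back to any variant already known to cause disease.
        return [v for v in patient_variants
                if (v["gene"], v["change"]) in disease_variants]

    symptom_genes = {"hypotonia": ["GENE_A"], "failure_to_thrive": ["GENE_A", "GENE_B"]}
    disease_variants = {("GENE_C", "c.100G>A")}
    patient_variants = [{"gene": "GENE_A", "change": "c.55C>T"},
                        {"gene": "GENE_C", "change": "c.100G>A"}]

    print(prioritize(patient_variants, ["hypotonia"], symptom_genes, disease_variants))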

With a recent $5 million grant from the National Institutes of Health, the hospital will study the benefits and risks of using rapid genomic sequencing on severely ill newborns. The study will involve 1,000 infants; doctors will use the rapid sequencing for half of these newborns as part of their diagnostic workup.

Sequencing is still a relatively new medical testing tool, and this large study, along with three others underway at other NIH-funded centers, will determine how to best incorporate the technology into newborn care, or whether it should be incorporated at all, says Geoffrey Ginsburg, director of Genomic Medicine at Duke University’s Institute for Genome Sciences and Policy. These tests may help improve the accuracy, reduce the turnaround time, and lower the cost of such screening, says Ginsburg.

The Children’s Mercy team has already used its rapid sequencing analyses on two dozen patients. Soden and her colleagues say the results often help guide families and doctors. The rapid whole-genome analysis costs around $10,000 per sample. For less time-pressured cases where more is known about a child’s condition, Children’s Mercy also offers a lower cost and more targeted analysis. This screen focuses on 514 different genes that are each known to cause genetic disorders in young patients. That test, which takes a few weeks, can help families that have struggled with the mystery of undiagnosed and often debilitating conditions for years.