News Release

Different microarray systems more alike than previously thought

Peer-Reviewed Publication

Johns Hopkins Medicine

A multicenter comparison of equipment that can analyze the expression of thousands of genes at once to create a genetic "fingerprint" suggests that these different microarray technologies are more alike than once thought.

Published in the May 2005 issue of Nature Methods, the study provides new hope that the mounds of information generated by these systems might actually be comparable, even though many different systems are used by many different laboratories.

"Biologists occasionally ask me what is the best platform on which to run microarray experiments, but there hasn't been a good answer to that question," said Rafael Irizarry, Ph.D., associate professor of biostatistics at the Johns Hopkins Bloomberg School of Public Health. "Now we see the possibility that it may not make much difference as long as the systems are used properly and the results reported properly."

The impact of such a finding is significant, experts say, because these systems may ultimately allow scientists to uncover critical differences between, say, healthy and cancerous tissue. Unless laboratories can rely on each other's results, such findings would not have much value.

For the study, 10 laboratories in the Washington, D.C./Baltimore area were given identical samples of RNA -- DNA's cousin -- to analyze using a microarray platform of their choice. Five laboratories used commercially available Affymetrix microarrays, and the other five used one of two more flexible systems.

The Hopkins researchers noted that published comparisons of different microarray systems had been mixed -- some found compatibility and some did not -- and that no study had explored whether a laboratory's way of using a particular system affected the results.

"Given the amount of tinkering that occurs in some labs, my intuition was that different labs might obtain very different answers even when using the same platform," said Irizarry. "But we show that platforms can agree rather well, and that there is a large lab effect that accounts for why others thought comparisons revealed inconsistencies."

"This is good news for many people," said Forrest Spencer, Ph.D., associate professor at Johns Hopkins' McKusick-Nathans Institute of Genetic Medicine. "Given appropriate and careful analysis, it should be possible to compare historical data to future experiments, even if the technology changes or people make different choices in their platform."

When the study began roughly two years ago, microarray platforms were an exciting technology in the medical and biological sciences, but comparisons of results from different platforms had proved inconsistent. Previous studies had indicated, for example, that Affymetrix GeneChips, the commercially available and most expensive of the three platforms tested, gave results that differed from those of the other platforms.

Affymetrix GeneChips provided highly reproducible, or consistent, results, but for a laboratory on a budget, more cost-effective in-house systems were attractive.

"Technical choice of microarray platforms is a key issue for people who want to look at changes in gene expression. It is important for both cost and quality of data," she said. "But people are in need of clear information in making the choice."

For the study, five expert labs used Affymetrix GeneChips, three used "homemade" versions called two-color spotted cDNA arrays, and two used another commercial platform called two-color long oligonucleotide arrays. In general, the study showed Affymetrix GeneChips generated more reproducible data than the other microarrays.

But the researchers also showed that any of the platforms can perform quite well -- or quite poorly. The overall best performance came from one of the labs using a two-color long oligonucleotide array, which achieved better accuracy and precision than the remaining nine labs. However, the other lab using that platform had the worst overall performance. With the exception of that one lab, the results among platforms were in general agreement, and the quality of expensive and inexpensive platforms was similar across the board.
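
Here "accuracy" and "precision" measure different things: accuracy reflects how close measurements come to the true values, while precision reflects how well a lab's repeated measurements agree with one another. The toy sketch below (Python, with simulated numbers and generic stand-in metrics, not the study's actual statistics) shows how a lab can be precise without being accurate.

    # Illustrative sketch only -- simulated data, generic metrics.
    # Accuracy: error of the averaged measurement against a reference (lower is better).
    # Precision: gene-by-gene spread across replicate arrays (lower is better).
    import numpy as np

    rng = np.random.default_rng(1)
    n_genes = 500
    reference = rng.normal(0, 1, n_genes)    # hypothetical "true" log-ratios

    def lab_measurements(scale, noise_sd, n_reps=3):
        # Simulate replicate arrays from one lab: scaled signal plus random noise.
        return [scale * reference + rng.normal(0, noise_sd, n_genes) for _ in range(n_reps)]

    def accuracy_rmse(reps):
        # Root-mean-square error of the replicate average against the reference.
        return np.sqrt(np.mean((np.mean(reps, axis=0) - reference) ** 2))

    def precision_sd(reps):
        # Average standard deviation across replicates, computed gene by gene.
        return np.mean(np.std(reps, axis=0, ddof=1))

    noisy_but_faithful = lab_measurements(scale=1.0, noise_sd=0.3)
    tight_but_compressed = lab_measurements(scale=0.5, noise_sd=0.1)

    for name, reps in [("noisy but faithful", noisy_but_faithful),
                       ("tight but compressed", tight_but_compressed)]:
        print(f"{name}: accuracy (RMSE) = {accuracy_rmse(reps):.2f}, "
              f"precision (SD) = {precision_sd(reps):.2f}")
    # The "tight but compressed" lab is more precise but less accurate:
    # its replicates agree closely, yet they understate the true differences.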

The researchers also note that although the technologies show general agreement, the results are far from identical. Additional analyses are needed to determine whether the seemingly minor remaining differences between platforms follow a pattern that could affect a researcher's weighing of cost against performance for a given experiment. Future studies could also include more recently introduced versions of the microarray platforms.

The research was funded by the National Institutes of Health Specialized Centers of Clinically Oriented Research (SCCOR), the National Institute of Diabetes and Digestive and Kidney Diseases, and by donations of time and reagents from participating labs and core facilities.

Authors of the report are Rafael Irizarry of the Johns Hopkins Bloomberg School of Public Health; and Forrest Spencer, Daniel Warren, and Irene Kim of the Johns Hopkins School of Medicine.

###

