Genome project to require Google-like computing power

An ambitious project aimed at producing a more detailed understanding of the link between genetic variation and susceptibility to disease will require an unprecedented amount of computing power and terabytes of data storage, according to its leaders.

The 1,000 Genomes Project, announced earlier this week by an international consortium that includes the National Human Genome Research Institute, part of the National Institutes of Health, plans to examine the human genome over a three-year period at a level of detail never before achieved.

The project "will greatly expand and further accelerate efforts to find more of the genetic factors involved in human health and disease," said Richard Durbin, deputy director of the Wellcome Trust Sanger Institute in Cambridge, England.

Francis Collins, director of the research institute, said the project will lead to a fivefold increase in the sensitivity of disease discovery efforts across the human genome.

Any two humans are more than 99 percent similar at the genetic level, but the fractional differences can help determine susceptibility to disease and how the body will respond to drugs. The goal of the project is to produce a catalog of variants that are present at 1 percent or greater frequency in the human population across most of the genome. That requires the project to sequence the genomes of at least 1,000 people.
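
A quick way to see why roughly 1,000 people is the right order of magnitude: each person carries two copies of each chromosome, so 1,000 individuals yield about 2,000 sampled chromosomes, in which a variant at 1 percent frequency should turn up about 20 times. The short Python sketch below works through that arithmetic; the diploid-sampling framing and the round numbers are an illustration, not the project's own power calculation.

    # Back-of-the-envelope check: why ~1,000 diploid genomes suffice to catalog
    # variants at >= 1% population frequency. Figures are illustrative only.

    SAMPLES = 1_000                  # people sequenced
    CHROMOSOME_COPIES = 2 * SAMPLES  # each person carries two copies of each chromosome
    MIN_FREQUENCY = 0.01             # the 1 percent threshold cited by the project

    expected_observations = CHROMOSOME_COPIES * MIN_FREQUENCY
    print(f"A variant at {MIN_FREQUENCY:.0%} frequency is expected about "
          f"{expected_observations:.0f} times in {CHROMOSOME_COPIES} sampled chromosomes")
    # -> about 20 expected observations, enough to distinguish a real variant
    #    from occasional sequencing error at most positions.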

The project plans to sequence 8.2 billion DNA base pairs a day -- or the equivalent of more than two human genomes every 24 hours -- during its two-year production phase, for a total of 6 trillion DNA bases, said Gil McVean, co-chair of the analysis committee and professor of mathematical genetics at the University of Oxford.
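
Those figures hang together arithmetically: at roughly 3 billion base pairs per human genome, 8.2 billion bases a day is a bit under three genome-equivalents, and about 730 days of production comes to roughly 6 trillion bases. The Python check below assumes those round numbers (a 3-billion-base genome and a 730-day production phase) purely for illustration.

    # Sanity check of the throughput figures quoted above. The genome size and
    # production length are assumed round numbers, not project specifications.

    BASES_PER_DAY = 8.2e9        # 8.2 billion DNA bases sequenced per day
    HUMAN_GENOME_BASES = 3.0e9   # approximate size of one human genome
    PRODUCTION_DAYS = 730        # roughly two years

    genomes_per_day = BASES_PER_DAY / HUMAN_GENOME_BASES
    total_bases = BASES_PER_DAY * PRODUCTION_DAYS

    print(f"{genomes_per_day:.1f} genome-equivalents per day")   # ~2.7
    print(f"{total_bases / 1e12:.1f} trillion bases in total")   # ~6.0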

Managing this massive amount of data will require novel computational methods. Gonçalo Abecasis, a professor of applied statistics and geneticist at the Center for Statistical Genetics at the University of Michigan, said the data produced by the genome project will be so immense that the only operation he can think of that is similar in scope is Google's search engine, which handles billions of Web searches daily.

If the project had to start crunching all the sequence data today, Abecasis estimated it would take a supercomputer with 10,000 massively parallel processors. But, he said, the project is working to develop algorithms and mathematical and computational models that should reduce the computing requirements.

Because the genomes of most people are largely similar, Abecasis said he is working on models and algorithms designed to process only the fractional differences, much as video compression algorithms devote processing power to objects that move rather than to the static background.
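
In practice, that analogy suggests a diff-against-a-reference scheme: rather than storing and processing every base for every individual, one records only the positions where a person's sequence departs from a shared reference. The Python sketch below illustrates the general idea only; the function names and encoding are hypothetical and are not the project's actual models or file formats.

    # Illustrative delta encoding of an individual's sequence against a shared
    # reference: store only (position, base) pairs where the two differ.
    # This mirrors the video-compression analogy in spirit; it is not the
    # project's actual algorithm or data format.

    def encode_differences(reference: str, individual: str) -> list[tuple[int, str]]:
        """Return the positions and bases where `individual` differs from `reference`."""
        return [(i, b) for i, (r, b) in enumerate(zip(reference, individual)) if r != b]

    def reconstruct(reference: str, diffs: list[tuple[int, str]]) -> str:
        """Rebuild the individual's sequence from the reference plus its differences."""
        seq = list(reference)
        for pos, base in diffs:
            seq[pos] = base
        return "".join(seq)

    reference  = "ACGTACGTACGT"
    individual = "ACGTTCGTACGA"   # differs at positions 4 and 11

    diffs = encode_differences(reference, individual)
    print(diffs)                                    # [(4, 'T'), (11, 'A')]
    assert reconstruct(reference, diffs) == individual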

The models are still being developed, but the project will still require supercomputers to manipulate the data, though with far fewer than 10,000 processors, Abecasis said.

The Beijing Genomics Institute in Shenzhen, China, is the other key research organization participating in the project and will perform sequencing along with the Wellcome Trust Sanger Institute and the research institute's large-scale sequencing network. That network includes the Broad Institute of MIT and Harvard, the Washington University Genome Sequencing Center at the Washington University School of Medicine in St. Louis, and the Human Genome Sequencing Center at the Baylor College of Medicine in Houston.
