A single set of data points (clinical, demographic, genomic, molecular) about a patient or population of patients will not, on its own, provide the insight we need to address and cure disease. However, with significant advances in science and biotechnology, combined with big data and analytics, we have the opportunity to develop and apply a more personalized, and therefore more successful, approach to preventing and treating life-threatening diseases such as cancer, cystic fibrosis, tuberculosis, multiple sclerosis, cardiovascular disease and HIV/AIDS.
For example, groundbreaking work is underway at Vanderbilt University School of Medicine to identify the genetic basis of diseases and drug response. This work will help physicians better tailor care and potentially develop new therapies for disease prevention. But uncovering patterns hidden deep in medical records and DNA databanks can be challenging.
Using powerful big data and analytics capabilities as part of IBM Watson Foundations, Vanderbilt University School of Medicine clinicians cut research timelines from nearly a year to a few weeks to help accelerate the pace of discovery and, ultimately, improve patient health. Researchers and clinicians are working together to learn which patients may be at risk for certain diseases and why some patients respond well to certain drugs.
For Vanderbilt University School of Medicine, it was critical to have a big data and analytics solution that would enable testing and re-testing of theories at the speed of thought. “If everything you want to test takes a week to test, you’re much less likely to try something that you don’t think will work,” Dr. Joshua Denny, associate professor of biomedical informatics at Vanderbilt University School of Medicine, shared, “but sometimes the ideas that have a lower likelihood of success yield very interesting outcomes.” Read the full case study to learn more.
From a drug development perspective, dwindling pipelines, large investments and costly failures in Phase I clinical trials have led pharmaceutical manufacturers to apply a more targeted approach to personalized treatments based on patients’ biomarker profiles. Biomarkers are biological measurements that can be used to predict risk of disease, to enable early detection of disease, to improve treatment selection and to monitor the outcome of therapeutic interventions.
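To make the idea of a risk-predicting biomarker concrete, here is a minimal sketch of how a single measurement might feed a logistic risk score. Everything in it is hypothetical: the biomarker levels, the coefficients and the resulting probabilities are invented for illustration and are not drawn from any real study or from the tools described in this post.

```python
import math

def biomarker_risk(level, intercept=-4.0, coefficient=0.05):
    """Toy logistic model mapping a hypothetical biomarker level to a
    probability of disease. The coefficients are illustrative only."""
    return 1.0 / (1.0 + math.exp(-(intercept + coefficient * level)))

# In this toy model, a higher biomarker level yields a higher predicted risk.
low = biomarker_risk(20.0)    # modest level -> low probability
high = biomarker_risk(120.0)  # elevated level -> high probability
```

In practice such models involve many biomarkers, careful validation and clinical judgment; the sketch only illustrates the prediction step named in the definition above.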
This, too, requires big data and analytics capabilities. Fortunately, researchers and scientists have begun to benefit from tranSMART, a community-developed open source platform. TranSMART is an environment in which life scientists and bioinformaticians can store patients’ omics data (genomics, transcriptomics, proteomics and so on) and correlate it with phenotypic data of clinical relevance (disease subtypes, drug response, survival expectancy and so on). IBM has optimized tranSMART to take advantage of PureData System for Analytics (Netezza) to accelerate queries and analytics.
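The core correlation task described above, linking a genomic observation to a clinical phenotype, can be sketched in a few lines. The patient records, the variant status and the drug-response scores below are all invented for illustration; a real tranSMART study operates on far larger and richer data sets.

```python
from statistics import mean

# Hypothetical patient records pairing a genomic variant call with an
# observed drug-response score (all values are invented for illustration).
patients = [
    {"variant": "present", "response": 0.82},
    {"variant": "present", "response": 0.75},
    {"variant": "absent",  "response": 0.40},
    {"variant": "absent",  "response": 0.35},
    {"variant": "present", "response": 0.78},
]

def mean_response_by_variant(records):
    """Group drug-response scores by variant status and average each group:
    a minimal form of genotype-phenotype correlation."""
    groups = {}
    for record in records:
        groups.setdefault(record["variant"], []).append(record["response"])
    return {status: mean(scores) for status, scores in groups.items()}

summary = mean_response_by_variant(patients)
# In this toy data set, carriers of the variant respond better on average.
```

The point is not the arithmetic but the shape of the question: once omics and phenotypic data live side by side, asking "do carriers of this variant respond differently to this drug?" becomes a routine query rather than a months-long project.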
Collaborating with ConvergeHEALTH by Deloitte, IBM is helping academic medical research centers and pharmaceutical companies process immense data sets and build complex analytical models of thousands of genes and genomes for better biomarker discovery, faster patient stratification and deeper comparative effectiveness research. See it in action at the upcoming BioIT World taking place in Boston, MA from April 29–May 1, 2014.
A new era is upon us. Third-generation sequencing technologies, such as on-chip sequencing, are exponentially increasing the volume and complexity of genomic data that must be analyzed and stored. Furthermore, researchers must not only process immense data sets but also build complex analytical models for genome-wide comparisons, combinatorial genetics, pharmacogenomics and metagenomics. And as genomic information is used increasingly as a basis for personalized medicine and the analysis of drug effects on populations, big data and analytics capabilities will only grow in importance.