Precision Medicine: Setting a Strong Foundation to Accelerate Progress
Contributed Commentary By Josh Gluck
December 11, 2019 | Precision medicine depends on data; more specifically, on our ability to analyze and act on it from the bench to the bedside and back again. We have long understood that the ability to direct data, both structured and unstructured, to where and when it is most needed is foundational to unlocking the promise of precision medicine.
It has been a long journey, but we are at last seeing measurable progress in the delivery of precision care. For instance, it is becoming the norm to analyze tumors for known gene mutations or expression patterns in order to select the treatments likely to be most effective. There is still much work to be done, and we see artificial intelligence (AI) and machine learning poised to play an increasingly important role in the evolution and delivery of precision medicine.
To expand adoption of AI and unleash its power to propel precision medicine to new heights, we must focus on improving our ability to democratize data across the entire healthcare and life sciences ecosystem.
Increasingly, we find that as organizations progress toward precision medicine and mainstream AI to support it, their infrastructures are not built for the mission. They quickly discover that what worked yesterday may not (and likely will not) work tomorrow, starting with siloed and outdated infrastructures. The information needed to power AI and advance precision medicine from development to delivery cannot flow freely within and between organizations on both sides of the healthcare and life sciences ecosystem, from the lab to therapies at the bedside.
To truly democratize data and leverage AI to advance precision medicine, life sciences and healthcare organizations need to start with a new approach to data. We think of it as a modern data experience built on a data-centric architecture, one that places data, rather than applications or infrastructure, at the center of everything. Such an architecture, designed to consolidate islands of data infrastructure and simplify the data foundation, is defined by five key attributes:
- Real-time. It delivers the right insight at the right time to drive improved clinical outcomes.
- On-demand and self-driving. It places automation at its core and leverages machine learning to provide high availability and proactive support. A data-centric architecture should be easy to provision and should evolve with your needs.
- Exceptionally reliable and secure. This is a must—especially when it comes to critical patient and clinical trial participant data and protected health information.
- Support for multi-cloud environments. It should allow storage volumes to move easily to and from the cloud, and between cloud providers, making application and data migration simple and enabling hybrid use cases for application development, deployment, and protection (see the sketch after this list). Many life sciences and healthcare organizations have become comfortable with private cloud, whether within their own datacenters or through remote hosting agreements, and some are moving workloads to the public cloud. A data-centric architecture should support the flexibility to take advantage of the cloud when and how an organization chooses.
- Constantly evolving and improving. Users have come to expect that their IT infrastructure continuously gets better, without downtime, delivering more value every year at the same or lower cost. Life sciences organizations should expect the same from their storage infrastructure. They must architect for constant improvement so that storage services can be upgraded seamlessly, without ever taking applications or users offline.
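To make the multi-cloud attribute concrete, here is a minimal, hypothetical sketch of data mobility between environments, assuming both sides expose an S3-compatible API. The endpoint URL and bucket names are illustrative placeholders, not features of any particular product:

```python
import boto3

# Client for an on-premises, S3-compatible object store (placeholder URL).
on_prem = boto3.client("s3", endpoint_url="https://objects.lab.example.org")
# Client for a public cloud provider (default endpoint, credentials from env).
cloud = boto3.client("s3")

SRC_BUCKET, DST_BUCKET = "genomics-raw", "genomics-raw-cloud"

# Stream every object from the private store to the public bucket so the
# same data is reachable in both environments.
paginator = on_prem.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SRC_BUCKET):
    for obj in page.get("Contents", []):
        body = on_prem.get_object(Bucket=SRC_BUCKET, Key=obj["Key"])["Body"]
        cloud.upload_fileobj(body, DST_BUCKET, obj["Key"])
```

Because both endpoints speak the same protocol, the application logic is identical in either direction; only the endpoint changes, which is what makes hybrid and multi-cloud deployment practical.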
The data-centric architecture requires a new type of data hub, one that allows organizations to consolidate multiple research and development systems as well as clinical applications on a single storage platform, unifying and sharing data across the applications that need it for better insight. It must be designed to share and deliver data securely within an organization, and increasingly between healthcare and life sciences organizations, so that all stakeholders can benefit from the insights within their data, rather than serving as a cold-data repository where those insights remain just beyond reach.
Since most organizations want to preserve existing infrastructure investments and reduce risk, the hub should allow them to share data across teams and applications, taking the key strengths of each silo, the unique features that suit it to its own tasks, and integrating them into a single unified platform.
A data hub must have four qualities that are essential to unifying data:

- High throughput, supporting both file and object access at the same time.
- Native scale-out, supporting growth and demand as needed.
- Multi-dimensional performance, without the need to pre-tune or re-tune along the way.
- Native support for massively parallel architectures that mimic the structure of GPUs, delivering performance to tens of thousands of cores accessing billions of objects.
In life sciences, as in other industries, the ability to support data as objects is increasingly important because next-generation engineers are coding in a cloud world, where objects enable greater simplicity and flexibility. A data hub must enable high-performance object storage on-premises so that organizations can move between public and private clouds without compromising performance.
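As a minimal illustration of the file-plus-object idea, the hypothetical sketch below reads the same dataset twice: once as a POSIX file through a mounted file system, and once as an object through an S3-compatible API. The mount path, endpoint URL, and bucket name are placeholders assumed for the example:

```python
import boto3

# An existing pipeline reads the sample through a mounted file system.
with open("/mnt/datahub/samples/sample-001.vcf", "rb") as handle:
    header = handle.read(64)

# A newer, cloud-style service reads the identical bytes as an object
# through an S3-compatible endpoint (placeholder URL and bucket).
s3 = boto3.client("s3", endpoint_url="https://datahub.example.org")
response = s3.get_object(Bucket="samples", Key="sample-001.vcf")
assert response["Body"].read(64) == header  # same data, two access paths
```

The point of the sketch is that neither access path requires copying the data: an established pipeline keeps its file semantics while newer, cloud-native services address the identical bytes as objects.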
A data-centric architecture, powered by a data hub that possesses all four of these qualities, is essential for life sciences and healthcare organizations looking to advance precision medicine and optimize the use of AI-enabled technology in that quest. It ensures that the data at the heart of this opportunity is truly democratized and can be applied where and when it is needed, from the bench to the bedside and back again.
Josh Gluck is Vice President of Global Healthcare Technology Strategy at Pure Storage, where he is responsible for Pure's healthcare solutions technology strategy, market development, and thought leadership in healthcare. He is also an Adjunct Assistant Professor of Health Policy & Management at NYU's Robert F. Wagner Graduate School of Public Service. He can be reached at jgluck@purestorage.com.