Why Mass Spectrometry Is Overtaking Affinity-Based Approaches In Plasma Proteomics
Contributed Commentary by Lukas Reiter, Biognosys, and Daniel Lopez-Ferrer, Thermo Fisher Scientific
December 10, 2021 | Large-scale plasma proteomics studies have applications across all parts of the medical and biological sciences, from clinical trials and basic research to, increasingly, the measurement of patient samples in hospitals. Demand for proteomics studies has grown in recent years, with increasing emphasis on datasets from large patient cohorts in which thousands of proteins are measured in each person. In part, this has been driven by developments in computational methods, such as deep learning for data analysis, as well as advances in analytical methodologies.
Proteomics datasets are used to identify disease biomarkers for diagnosis and prognosis and are driving innovation in drug discovery and development. As such, scientists require methods that can efficiently deliver reproducible qualitative and quantitative results across large human cohorts. Traditional affinity-based assays cannot keep up with the demand for large-scale proteomics studies, which is why laboratories are turning to mass spectrometry (MS)-based workflows to measure patient samples. Here, we describe how new advances in MS-based methods are increasing the reproducibility, specificity, and coverage of proteome profiling.
Moving Beyond Affinity-Based Assays
Affinity-based assays, like western blots and immunoassays, have been the backbone of proteomics studies for decades. Often the first techniques that scientists learn in the laboratory, they are easy to perform, require minimal training, and produce rapid, meaningful readouts of protein abundance and interactions in simple studies. However, as demand for more complex and comprehensive proteomics data increases, affinity-based assays run into a series of issues that hold scientists back from a more detailed understanding, particularly in large-scale studies.
Blood plasma samples are commonly used for proteomics research as they are easy to access with minimal invasion to patients. However, the plasma proteome is highly complex and heterogeneous, containing tens of thousands of proteins spanning a huge dynamic range. Affinity-based proteomics analysis of plasma samples is very labor-intensive, making high-throughput analyses challenging. There are also inherent issues with antibodies that can reduce the reliability of results. Antibody specificity is limited and can vary with factors such as ionic strength, temperature, and protein-protein interactions. This can introduce high variability into results, leading to irreproducibility across large cohorts of immunoassay data and making datasets difficult to compare reliably. Additionally, these techniques measure only the proteins included in a pre-developed panel, introducing bias into the results.
Mass Spectrometry Reveals The Complexity Of The Plasma Proteome
MS-based methods have greatly expanded the scope of proteomics analysis and furthered the ability of scientists to answer new research and clinical questions. Unlike affinity-based assays, MS provides unbiased analysis of samples, meaning the entire proteome of a sample can be screened rather than only specific proteins already known to be of interest. MS is peptide-based, meaning it can distinguish between protein isoforms, detect post-translational modifications (PTMs), and even provide structural information when using techniques like limited proteolysis-coupled MS. The peptide-based approach also means that MS can overcome the specificity issues of binder-based assays. For example, MS can still detect proteins whose epitopes would be inaccessible to antibodies due to conformational changes or molecular interactions. Techniques like parallel reaction monitoring (PRM) MS can be easily added to MS-based discovery workflows to perform targeted quantitative proteomics. PRM allows clear and transferable detection of analytes, which is particularly important for clinical analyses of patient samples. Finally, MS-based methods allow high-throughput data acquisition in large-scale studies. Scientists can achieve wide proteome coverage, studying thousands of proteins in individual samples, and replicate the methods across thousands of samples.
Overcoming The Challenges of Mass Spectrometry
MS-based methods are not without their issues, however. Barriers to adopting these systems for proteomics analysis often center on the perceived complexity of multi-step workflows and the perception that these methods require extensive experience to set up experiments and analyze data. Some scientists may have started using MS-based methods but encountered issues that prevent them from obtaining the high-quality datasets these methods are capable of producing.
Users may not have found the best workflow to reliably analyze low-abundance proteins, or their sample preparation methods may not be optimized. For example, data-dependent acquisition (DDA) is the most popular acquisition method for basic research or early-stage drug discovery with small study sizes, but DDA settings can be complex to establish, leaving room for errors in method development and application. Moreover, DDA relies on real-time selection rules, typically fragmenting only the most intense precursors in each survey scan, so the set of peptides analyzed varies from run to run. This leads to missing data that complicates statistical analysis of datasets.
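To make the missing-value problem concrete, here is a toy simulation (not any vendor's acquisition logic; the peptide pool, intensities, and top-N setting are all invented for illustration) of a DDA instrument that fragments only the most intense precursors in each run:

```python
import random

random.seed(0)

# Hypothetical peptide pool: base intensities spanning several orders of magnitude.
peptides = {f"pep{i}": 10 ** random.uniform(3, 8) for i in range(200)}
TOP_N = 20  # the instrument fragments only the N most intense precursors

def dda_run(peptides, top_n):
    """Simulate one DDA run: apply run-to-run intensity noise, then top-N selection."""
    observed = {p: i * random.lognormvariate(0, 0.5) for p, i in peptides.items()}
    return set(sorted(observed, key=observed.get, reverse=True)[:top_n])

runs = [dda_run(peptides, TOP_N) for _ in range(5)]
in_any = set.union(*runs)          # peptides identified in at least one run
in_all = set.intersection(*runs)   # peptides identified in every run
print(f"identified in at least one run: {len(in_any)}")
print(f"identified in every run:        {len(in_all)}")
```

Because the selection depends on noisy intensities, the complete-in-every-run set is smaller than the union: the gap between the two counts is exactly the missing data that DIA methods were designed to avoid.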
Improvements To Mass Spectrometry Workflows
Recently, new MS-based methods have been developed to overcome these perceived barriers, with the aim of simplifying and expanding these methodologies and accelerating proteomics research. For instance, modern MS systems can offer pre-selectable workflow options that automate methods for quantitatively measuring proteins in different sample types. This greatly simplifies MS-based workflows, reducing training requirements and allowing scientists to focus their attention on the analysis of their datasets. Additionally, automating sample preparation steps can greatly improve the reproducibility and efficiency of workflows.
The use of different analytical methods can also enhance proteomics workflows, such as data-independent acquisition (DIA), which is now widely recognized as the method of choice for large-scale studies due to its high reproducibility and scalability. DIA is time-efficient and generates comprehensive quantitative data, leaving few gaps in the data matrix. In DIA, the quadrupole continuously cycles through isolation windows spanning the entire mass range, fragmenting all precursors in each window and producing multiplexed spectra for each sample. Along with increasing the depth of proteome analysis, DIA also increases consistency across samples, standardizing data and making experiments more comparable. Moreover, DIA greatly improves the throughput of large-scale studies involving thousands of samples. For example, a recent study analyzed more than 1,500 plasma samples using high-throughput MS and DIA at a rate of over 30 samples per day per instrument.
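The DIA cycle described above can be sketched as a simple isolation-window scheme; the mass range, window width, and overlap below are illustrative placeholders, not settings from any particular instrument method:

```python
def dia_windows(mz_start=400.0, mz_end=1000.0, width=24.0, overlap=1.0):
    """Generate sequential precursor isolation windows covering the mass range.

    Every cycle, the quadrupole steps through all windows in turn, so every
    precursor in range is fragmented regardless of its intensity.
    """
    windows = []
    lo = mz_start
    while lo < mz_end:
        hi = min(lo + width, mz_end)
        windows.append((lo, hi))
        if hi >= mz_end:
            break
        lo = hi - overlap  # small overlap avoids losses at window edges
    return windows

scheme = dia_windows()
print(f"{len(scheme)} windows per cycle, first: {scheme[0]}, last: {scheme[-1]}")
```

In a real method, narrower windows give cleaner, less chimeric spectra at the cost of a longer cycle time; balancing that trade-off is the central tuning decision in DIA method design.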
Technologies like high-field asymmetric waveform ion mobility spectrometry (FAIMS) can improve the ability of MS to identify peptides. FAIMS sits between the chromatographic separation and the mass spectrometer, increasing the signal-to-noise ratio of analyses by adding a selectivity step that reduces matrix background. By providing orthogonal selectivity, FAIMS reduces the need for longer chromatographic gradients, shortening workflow times, and can eliminate sample preparation steps. Overall, FAIMS speeds up workflows and increases both analysis quality and sample throughput for MS methods.
For plasma proteomics, improvements in depletion workflows have had a dramatic impact on performance. In plasma depletion, the 10-15 most abundant proteins are removed before analysis, increasing the accessible dynamic range and allowing scientists to detect additional proteins. A recent study of 180 pan-cancer human plasma samples combined automated sample preparation, plasma depletion, FAIMS, and DIA. With these techniques, the number of proteins detected in a given sample increased from 500 to over 2,000, and over 2,700 proteins were quantified across the entire study cohort. This workflow has been further optimized to quantify over 3,000 proteins in large-scale studies.
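As a back-of-the-envelope illustration of why depletion widens the accessible dynamic range, assume, hypothetically, that the dozen or so most abundant plasma proteins carry 95% of the total protein mass; that fraction is invented here purely to show the arithmetic:

```python
# Assumed mass fraction carried by the ~14 most abundant plasma proteins.
# The 0.95 figure is a hypothetical placeholder, not a measured value.
high_abundance_fraction = 0.95
remaining_fraction = 1.0 - high_abundance_fraction

# With a roughly fixed ion load per injection, removing the dominant proteins
# lets the rest of the proteome occupy the whole load instead of a sliver of it.
enrichment = 1.0 / remaining_fraction
print(f"remaining proteins effectively enriched ~{enrichment:.0f}x")
```

Under this assumption the rest of the proteome is effectively enriched about twentyfold, which is the intuition behind the jump in detected proteins reported above.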
Mass Spectrometry-Based Methods Are Opening New Doors For Plasma Proteomics
Overall, advances in MS-based workflows are pushing the boundaries of large-scale plasma proteomics, offering high reproducibility, sensitivity even for low-concentration proteins, and increased throughput. New developments in MS-based methods are greatly increasing the scope of proteomics, not only returning datasets that reveal the complexity of proteomes but also creating streamlined, efficient workflows ideal for large-cohort studies. It is no surprise, then, that MS is becoming a leading method for identifying disease biomarkers and accelerating our understanding of biological systems.
Daniel Lopez-Ferrer, Senior Manager, Proteomics Marketing, Thermo Fisher Scientific, and his team are focused on the identification and development of new proteomics analytical tools and applications that bring broad benefits to the biosciences community. He has held positions as a Senior Scientist at Caprion Proteomics (CA, USA) and Pacific Northwest National Laboratory (WA, USA), developing technologies for high-throughput, large-scale proteomics projects. Dani has over 35 peer-reviewed papers and several patents. He can be reached at daniel.lopezferrer@thermofisher.com.
Lukas Reiter graduated from ETH Zurich in molecular biology. For his PhD, he joined the groups of geneticist Michael Hengartner and proteomics pioneer Ruedi Aebersold and received his degree in 2009 from the University of Zurich. Lukas then joined Biognosys in 2010 as one of its first employees. As CTO, he oversees research as well as product and workflow development at Biognosys. Lukas is fascinated by proteomics technology and the idea of making it available to everybody who needs to know about proteins. He can be reached at lukas.reiter@biognosys.com.