2023
DOI: 10.1021/acs.analchem.3c00896

Cumulative Neutral Loss Model for Fragment Deconvolution in Electrospray Ionization High-Resolution Mass Spectrometry Data

Denice van Herwerden,
Jake W. O’Brien,
Sascha Lege
et al.

Abstract: Clean high-resolution mass spectra (HRMS) are essential to the successful structural elucidation of an unknown feature during nontarget analysis (NTA) workflows. This is a crucial step, particularly for spectra generated during data-independent acquisition or direct infusion experiments. The most commonly available tools take advantage of only the time domain for spectral cleanup. Here, we present an algorithm that combines time-domain and mass-domain information to perform spectral deconvolution. …
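As a rough illustration of the idea summarized in the abstract, and not the authors' published algorithm, the sketch below pairs DIA fragments with a candidate precursor by combining a mass-domain cue (the neutral loss, i.e., precursor m/z minus fragment m/z, which must be non-negative and, optionally, below a plausibility bound) with a time-domain cue (correlation of the fragment's extracted-ion chromatogram with the precursor's). The function name, thresholds, and scoring choices are illustrative assumptions; the paper's model additionally treats neutral losses cumulatively within a Bayesian framework, which this sketch does not attempt to reproduce.

```python
import numpy as np

def deconvolve_fragments(precursor_mz, precursor_eic, fragments,
                         min_time_corr=0.8, max_neutral_loss=None):
    """Illustrative sketch (hypothetical helper, not the published method):
    keep DIA fragments whose chromatographic profile correlates with the
    precursor (time domain) and whose neutral loss relative to the precursor
    is plausible (mass domain).

    precursor_mz     : float, m/z of the candidate precursor ion
    precursor_eic    : 1-D array, extracted-ion chromatogram of the precursor
    fragments        : list of (fragment_mz, fragment_eic) tuples
    min_time_corr    : assumed Pearson correlation threshold (illustrative)
    max_neutral_loss : optional upper bound on the allowed neutral loss (Da)
    """
    assigned = []
    for frag_mz, frag_eic in fragments:
        neutral_loss = precursor_mz - frag_mz
        # Mass domain: a genuine fragment cannot be heavier than its precursor.
        if neutral_loss < 0:
            continue
        if max_neutral_loss is not None and neutral_loss > max_neutral_loss:
            continue
        # Time domain: fragment and precursor should co-elute, i.e. their
        # chromatographic profiles should be strongly correlated.
        corr = np.corrcoef(precursor_eic, frag_eic)[0, 1]
        if corr >= min_time_corr:
            assigned.append((frag_mz, neutral_loss, corr))
    return assigned
```

A call such as `deconvolve_fragments(301.14, eic_precursor, [(283.13, eic_a), (150.02, eic_b)])` would return only the fragments that both co-elute with the precursor and imply a non-negative neutral loss, which conveys the combined time-domain and mass-domain filtering idea in its simplest form.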

Cited by 3 publications (5 citation statements)
References 40 publications
“…Low-quality MS2 spectra with unreliable fragment intensities or chimeric peaks (due to missingness or artifacts) might be attributed to instrumental noise (overshadowing low-abundance analytes) and/or co-eluting ions (due to sample complexity). Emerging solutions have been proposed, including a Bayesian approach that computes cumulative neutral losses to clean up DIA spectra post hoc with or without the time domain for fragment deconvolution…”
Section: HRMS: Experimental Techniques and Workflow (mentioning; confidence: 99%)
“…[216] Emerging solutions have been proposed, including a Bayesian approach that computes cumulative neutral losses to clean up DIA spectra post hoc with or without the time domain for fragment deconvolution. [217]…”
Section: HRMS: Experimental Techniques and Workflow (mentioning; confidence: 99%)
“…This typically results in very large and complex datasets (e.g., 5 GB per sample) that must be pre-processed prior to the identification workflow [31–33]. The NTA data processing workflows include several steps, from data conversion to library search and the confidence assessment of the candidate spectra [2,23,26–29]…”
Section: Introduction (mentioning; confidence: 99%)
“…Non-targeted analysis (NTA) combined with liquid chromatography–high-resolution mass spectrometry (LC–HRMS) is considered to be one of the most comprehensive methods for the detection and identification of known and unknown unknowns in complex environmental and biological samples. This approach utilizes a generic and wide-scope strategy for the sample preparation and analysis to maximize the coverage of the chemical space of the sample. This typically results in very large and complex data sets (e.g., 5 GB per sample) that must be preprocessed prior to the identification workflow. NTA data processing workflows include several steps, from data conversion to library searches and the confidence assessment of the candidate spectra. Because of the complexity of such data sets and the sheer size of the chemical databases, NTA workflows are not very sensitive and, thus, do not result in a high percentage of identified chromatographic features. A more sensitive but less comprehensive data processing alternative is suspect screening, in which the chemicals of interest are known prior to the data processing workflow. This approach is more sensitive in terms of the limits of detection, but it is unable to detect unknown unknowns. These two strategies are commonly employed together for the screening of complex environmental and biological samples…”
Section: Introduction (mentioning; confidence: 99%)
“…[2,13,21,23–31] This typically results in very large and complex data sets (e.g., 5 GB per sample) that must be preprocessed prior to the identification workflow. [31–33] NTA data processing workflows include several steps, from data conversion to library searches and the confidence assessment of the candidate spectra. [2,23,26–29] Because of the complexity of such data sets and the sheer size of the chemical databases, NTA workflows are not very sensitive and, thus, do not result in a high percentage of identified chromatographic features.…”
Section: Introduction (mentioning; confidence: 99%)