2013
DOI: 10.1016/j.ins.2010.12.013

Comparison of metaheuristic strategies for peakbin selection in proteomic mass spectrometry data

Abstract: Mass spectrometry (MS) data provide a promising strategy for biomarker discovery. For this purpose, the detection of relevant peakbins in MS data is currently under intense research. Data from mass spectrometry are challenging to analyze because of their high dimensionality and the generally low number of samples available. To tackle this problem, the scientific community is becoming increasingly interested in applying feature subset selection techniques based on specialized machine learning algorithms. In thi…

Cited by 14 publications (5 citation statements)
References 57 publications
“…Miguel et al [28] developed a hybrid metaheuristic based on variable neighborhood search (VNS) and tabu search (TS) for feature selection in classification. Garcia et al [29] compared the performance of the metaheuristic strategies (best first, GA, scatter search and VNS) for feature selection in proteomic mass spectrometry data.…”
Section: Figure 1 Stages of Semiconductor Manufacturing (mentioning)
confidence: 99%
“…In many real-world applications such as bioinformatics (Armananzas et al 2011; García-Torres et al 2013; Akand, Bain, and Temple 2010), medicine (da Silva et al 2011), text mining (Azam and Yao 2012; Feng et al 2012; Meng, Lin, and Yu 2011; Pinheiro et al 2012; Uguz 2011; Imani, Keyvanpour, and Azmi 2013), image processing (Jia et al 2013; Rashedi, Nezamabadi-pour, and Saryazdi 2013; Vignolo, Milone, and Scharcanski 2013), remote sensing (Ghosh, Datta, and Ghosh 2013; Guo et al 2008; Li et al 2011) and other domains (Pérez-Benitez and Padovese 2011; Wu et al 2010; Zhang et al 2011; Waad, Ghazi, and Mohamed 2013), the dimensionality of the data is so high that it may lead to the breakdown of an ordinary feature selection algorithm. High dimensionality of the data asks for the development of more complicated methods to apply feature selection on a large number of features.…”
Section: Related Work (mentioning)
confidence: 99%
“…Furthermore, the number of possible feature subsets grows exponentially with the number of features, and many problems related to feature selection have been shown to be NP-hard [6]. For all these reasons, finding the optimal subset is usually intractable [35] even for a moderate number of features d. Therefore, approximate algorithms are typically applied since they provide satisfactory solutions in a reasonable time (see, for example, [20,26,41]). Even if the obtained solution is suboptimal and there is no guarantee of the distance between such a solution and the optimal one, in general, they provide satisfactory solutions in a reasonable computational time.…”
Section: Introduction (mentioning)
confidence: 99%
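
The excerpt above rests on a simple counting argument: with d candidate peakbins there are 2^d possible feature subsets, so exhaustive evaluation is infeasible and approximate (heuristic) search is used instead. As a minimal illustration only, and not the procedure of the cited paper, the hedged Python sketch below runs a greedy forward selection with a cross-validated classifier as the subset evaluator; the synthetic data, the k-NN classifier and the stopping rule are all assumptions made for the example.

```python
# Minimal sketch (not the algorithm of the cited paper): greedy forward
# feature selection as one example of an approximate search that avoids
# enumerating all 2^d feature subsets.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a high-dimensional, small-sample MS-like dataset.
X, y = make_classification(n_samples=60, n_features=100, n_informative=10,
                           random_state=0)

clf = KNeighborsClassifier(n_neighbors=3)
selected, remaining = [], list(range(X.shape[1]))
best_score = 0.0

# Add one feature at a time as long as cross-validated accuracy improves.
while remaining:
    scores = [(cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean(), f)
              for f in remaining]
    score, f = max(scores)
    if score <= best_score:
        break
    best_score = score
    selected.append(f)
    remaining.remove(f)

print("selected features:", selected, "cv accuracy:", round(best_score, 3))
```

Instead of 2^100 subset evaluations, this loop evaluates at most on the order of d subsets per added feature, which is the practical trade-off the excerpt describes: a suboptimal but satisfactory subset in reasonable time.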