2007
DOI: 10.1109/icact.2007.358249

Automatic Modulation Recognition using Support Vector Machine in Software Radio Applications

Cited by 30 publications (8 citation statements)
References 3 publications
“…On one hand, different kinds of features are utilized for modulation classification, such as statistical features [10], high-order cumulants [11], [12] and wavelet cyclic features [13]. On the other hand, some well-known classifiers are employed in the AMC system, including support vector machine (SVM) [14], neural networks (NN) [15], k-nearest neighbors (KNN) [16], and decision tree [17]. Whichever strategy is adopted, the design of these methods typically relies on handcrafted feature engineering, which is determined by the specific case and professional domain knowledge.…”
Section: A. Automatic Modulation Classification
confidence: 99%
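The high-order cumulant features mentioned in this excerpt can be illustrated with a minimal sketch (not taken from any of the cited papers; the fourth-order cumulant C40 = E[x⁴] − 3·E[x²]² used here is one standard choice). On ideal, noise-free unit-power constellations it already separates BPSK (C40 = −2) from QPSK (C40 = −1):

```python
import cmath

def cumulant_c40(symbols):
    """Fourth-order cumulant C40 = E[x^4] - 3*E[x^2]^2 of a zero-mean complex signal."""
    n = len(symbols)
    m20 = sum(x * x for x in symbols) / n    # second-order moment E[x^2]
    m40 = sum(x ** 4 for x in symbols) / n   # fourth-order moment E[x^4]
    return m40 - 3 * m20 * m20

# Ideal unit-power constellations (no noise), so C21 = E[|x|^2] = 1
# and C40 needs no normalization here.
bpsk = [1 + 0j, -1 + 0j]
qpsk = [cmath.exp(1j * (cmath.pi / 4 + k * cmath.pi / 2)) for k in range(4)]

print(round(cumulant_c40(bpsk).real, 6))  # -2.0 for BPSK
print(round(cumulant_c40(qpsk).real, 6))  # -1.0 for QPSK
```

In practice such cumulants are estimated from noisy received samples and fed, together with other features, to a classifier such as an SVM.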
“…Since SVM is basically a binary classifier, it is not straightforward to apply it to multi-class classification problems. The most typical method for the multi-class problem is to classify one class against all the other classes (1-v-r); another typical method is to combine all possible two-class (pairwise) classifiers (1-v-1) [10]. It is known that the 1-v-1 type SVM is superior to 1-v-r with respect to learning time, but the execution time for classification with 1-v-1 is much worse than with 1-v-r [16].…”
Section: SVM
confidence: 99%
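The trade-off described in this excerpt can be made concrete with a small pure-Python sketch (the names `n_classifiers` and `ovo_predict` are illustrative, not from the paper): 1-v-r trains K binary classifiers, 1-v-1 trains K(K−1)/2 pairwise classifiers, and a 1-v-1 prediction must run every pairwise classifier and take a majority vote, which is why its classification time grows faster with K:

```python
from itertools import combinations
from collections import Counter

def n_classifiers(k):
    """Number of binary SVMs each scheme must train for k classes."""
    return {"1-v-r": k, "1-v-1": k * (k - 1) // 2}

def ovo_predict(pairwise, x, classes):
    """1-v-1 decision: every pairwise classifier votes, majority wins.
    pairwise[(a, b)] is a callable returning label a or b for sample x."""
    votes = Counter(pairwise[(a, b)](x) for a, b in combinations(classes, 2))
    return votes.most_common(1)[0][0]

# Toy stand-in for trained pairwise SVMs: 3 classes on the real line,
# each pairwise "classifier" picks the nearer class center.
centers = {"c0": 0.0, "c1": 5.0, "c2": 10.0}
classes = sorted(centers)
pairwise = {
    (a, b): (lambda x, a=a, b=b: a if abs(x - centers[a]) <= abs(x - centers[b]) else b)
    for a, b in combinations(classes, 2)
}

print(n_classifiers(9))                     # {'1-v-r': 9, '1-v-1': 36}
print(ovo_predict(pairwise, 4.2, classes))  # 'c1'
```

For the 9 modulation classes considered in the paper, 1-v-r needs 9 binary SVMs while 1-v-1 needs 36, and all 36 must be evaluated for each classification.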
“…In [10], a total of 7 statistical signal features were extracted and used to classify 9 modulation signals. The authors investigated the performance of the two types of SVM classifiers.…”
Section: Introduction
confidence: 99%
“…In recent years, as machine learning (ML) techniques continually demonstrate significant breakthroughs in various fields, many researchers also exploit ML techniques as classifiers operating on extracted features, such as support vector machine, K-nearest neighbor, and genetic programming [24]–[26]. Deep learning (DL), by contrast, can automatically learn high-level features without hand-engineered feature extraction.…”
Section: Introduction
confidence: 99%