2018
DOI: 10.1007/978-3-319-97785-0_3

Rotationally Invariant Bark Recognition

Cited by 5 publications (4 citation statements)
References 15 publications
“…The first row of Table 2 (NoDA) reports the performance obtained by a ResNet50 without data augmentation. The last row of Table 2 (State of the art) reports the best performance in the literature on each of the three data sets using the same testing protocol as this paper: the best for VIR is [33], for BARK [34], and for GRAV [12]. In [33], which reports the best performance on VIR, features were extracted from the deeper layers of three pretrained CNNs (Densenet201, ResNet50, and GoogleNet), transformed into a deep co-occurrence representation [35], and used to train separate SVMs that were finally fused by sum rule.…”
Section: Preprints (www.preprints.org) | Not Peer-reviewed | Posted: 2 November 2021
confidence: 99%
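The sum-rule fusion described in the statement above can be sketched as follows; the feature matrices here are synthetic stand-ins for the deep-layer activations of the three pretrained CNNs, not the actual features used in [33]:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for deep-layer features from three pretrained CNNs
# (Densenet201, ResNet50, GoogleNet); real features are far higher-dimensional.
n_samples, n_classes = 60, 3
y = rng.integers(0, n_classes, n_samples)
feature_sets = [rng.normal(size=(n_samples, 32)) + y[:, None] for _ in range(3)]

# One SVM per feature set; class scores are fused by the sum rule.
probs = np.zeros((n_samples, n_classes))
for X in feature_sets:
    clf = SVC(probability=True, random_state=0).fit(X, y)
    probs += clf.predict_proba(X)  # columns follow clf.classes_ = [0, 1, 2]

fused_pred = probs.argmax(axis=1)
```

The sum rule simply adds the per-classifier class-probability vectors before taking the argmax, so no fusion weights need to be learned.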
“…Since the deeper layers of a CNN produce high-dimensional features, dimensionality reduction was performed using the DCT [36]. In [34], which obtains the best performance on the BARK data set, a method based on 2D spiral Markovian texture features (2DSCAR) modeled via a multivariate Gaussian distribution was trained with a 1-NN classifier using Jeffreys divergence as the distance measure. In [34], which provides the best performance on GRAV, several ensembles were built from extracted views using a set of basic classifiers that included an SVM and two merge-view models proposed in [37].…”
Section: Preprints (www.preprints.org) | Not Peer-reviewed | Posted: 2 November 2021
confidence: 99%
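A minimal sketch of the DCT-based dimensionality reduction mentioned above: apply an orthonormal DCT along the feature axis and keep only the first coefficients. The feature dimensions and cut-off here are illustrative, not the values used in [36]:

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
feats = rng.normal(size=(10, 2048))  # hypothetical high-dimensional deep features

# Orthonormal DCT-II along the feature axis; for typical smooth feature
# vectors the energy compacts into low-frequency coefficients, so keeping
# the first k coefficients acts as dimensionality reduction.
coeffs = dct(feats, type=2, norm="ortho", axis=1)
reduced = coeffs[:, :128]
```

Because the orthonormal DCT is an orthogonal transform, the total energy of `coeffs` equals that of `feats`; truncation discards only the energy in the dropped coefficients.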
“…The first row of Table 2 (NoDA) reports the performance obtained by a ResNet50 without data augmentation. The last row of Table 2 (State of the art) reports the best performance reported in the literature on each of the data sets: VIR [46], BARK [47], GRAV [17], and POR [15]. In [46], which reports the best performance on VIR, features were extracted from the deeper layers of three pretrained CNNs (Densenet201, ResNet50, and GoogleNet), transformed into a deep co-occurrence representation [48], and used to train separate SVMs that were finally fused by sum rule.…”
confidence: 99%
“…As the deeper layers of a CNN produce high-dimensional features, dimensionality reduction was performed using the DCT [49]. In [47], which obtains the best performance on the BARK data set, a method based on 2D spiral Markovian texture features (2DSCAR) modeled via a multivariate Gaussian distribution was trained with a 1-NN classifier using Jeffreys divergence as the distance measure. In [47], which provides the best performance on GRAV, several ensembles were built from extracted views using a set of basic classifiers that included an SVM and two merge-view models proposed in [50].…”
confidence: 99%
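A toy illustration of the 1-NN scheme with Jeffreys (symmetric Kullback-Leibler) divergence as the distance measure, as used with the 2DSCAR features in the statements above; the histograms and species labels here are invented for the example:

```python
import numpy as np

def jeffreys_divergence(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler (Jeffreys) divergence between two histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum((p - q) * (np.log(p) - np.log(q))))

def nn_classify(query, train_feats, train_labels):
    """1-NN: return the label of the training histogram nearest to the query."""
    dists = [jeffreys_divergence(query, t) for t in train_feats]
    return train_labels[int(np.argmin(dists))]

# Invented bark-texture histograms standing in for 2DSCAR feature vectors.
train = np.array([[8.0, 1.0, 1.0], [1.0, 8.0, 1.0]])
labels = ["species A", "species B"]
print(nn_classify([7.0, 2.0, 1.0], train, labels))  # nearest to the first prototype
```

Unlike plain KL divergence, the Jeffreys form is symmetric in its arguments, which makes it usable directly as a nearest-neighbour distance.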