2001
DOI: 10.1016/s0003-682x(01)00009-3

Application of time domain signal coding and artificial neural networks to passive acoustical identification of animals

Cited by 60 publications (43 citation statements)
References 10 publications
“…The sounds can be recorded with a microphone and can be analyzed in the time domain (Chesmore, 2001) or in the frequency domain (Potamitis et al., 2006). Chesmore (2001) reported a correct classification rate of 99.4% under low-noise conditions (an experiment with 25 British Orthoptera species). Potamitis et al. (2006) obtained a correct classification rate of 94% (identification of 105 species belonging to 6 subfamilies of North-Mexican crickets).…”
Section: Introduction
confidence: 99%
“…Whilst normalisation of sequence length mitigates this effect at a gross level, it is still likely that there will be timing variations between the responses of different subjects within a class. In future work, we plan to address this issue by looking at the utility of feature-based encodings, such as time domain signal coding [3], which are less sensitive to scale.…”
Section: Results
confidence: 99%
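The statement above refers to time domain signal coding (TDSC) as a feature-based encoding that is less sensitive to scale. As a rough illustration only, the sketch below encodes each zero-crossing epoch by its duration and a simple shape measure and returns a normalised code histogram; the binning scheme, histogram size, and shape measure are placeholder assumptions, not the codebook used in the cited paper.

```python
import numpy as np

def tdsc_features(signal, n_duration_bins=8, n_shape_bins=4):
    """Minimal TDSC-style sketch (assumed quantiser, not the published codebook).

    The waveform is split into epochs between successive positive-going
    zero crossings; each epoch is summarised by its duration (samples)
    and a shape measure (number of local minima). Each (duration, shape)
    pair is quantised to a code, and the normalised code histogram is
    returned as a fixed-length feature vector for a classifier.
    """
    # indices of positive-going zero crossings
    crossings = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]

    codes = []
    for start, end in zip(crossings[:-1], crossings[1:]):
        epoch = signal[start:end]
        duration = end - start
        # shape measure: number of local minima inside the epoch
        if len(epoch) > 2:
            minima = int(np.sum((epoch[1:-1] < epoch[:-2]) & (epoch[1:-1] < epoch[2:])))
        else:
            minima = 0
        # crude logarithmic duration binning and linear shape binning (assumptions)
        d_bin = min(int(np.log2(duration + 1)), n_duration_bins - 1)
        s_bin = min(minima, n_shape_bins - 1)
        codes.append(d_bin * n_shape_bins + s_bin)

    hist = np.bincount(np.asarray(codes, dtype=int),
                       minlength=n_duration_bins * n_shape_bins).astype(float)
    return hist / max(hist.sum(), 1.0)
```

Because the features are histograms of per-epoch codes rather than raw sample sequences, the resulting vector has a fixed length regardless of recording duration, which is the scale-insensitivity the quoted authors allude to.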
“…The feature extraction block is constructed using twelve subblocks that compute six feature components from the lower subband and six others from the upper subband according to Equation (4). The first component of the lower subband (x m,1 ) and the first component of the upper subband (x m,7 ) are described in Figure 5.…”
Section: Feature Extraction
confidence: 99%
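The quoted passage describes a feature-extraction block with twelve sub-blocks, six components from the lower subband (x_{m,1}..x_{m,6}) and six from the upper subband (x_{m,7}..x_{m,12}), but its Equation (4) is not reproduced on this page. The sketch below only illustrates the general shape of such a 12-dimensional two-subband extractor; the band-split frequency and the six per-band features are stand-in choices, not the components defined in the cited paper.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def subband_features(frame, fs, split_hz=None):
    """Hypothetical two-subband feature extractor (placeholder features).

    The frame is split into a lower and an upper subband; six components
    are computed per subband, giving a 12-dimensional feature vector.
    """
    split_hz = split_hz or fs / 4  # assumed band-split frequency
    low = sosfilt(butter(4, split_hz, btype="low", fs=fs, output="sos"), frame)
    high = sosfilt(butter(4, split_hz, btype="high", fs=fs, output="sos"), frame)

    def six(x):
        # stand-in features: energy, zero-crossing rate, mean, variance,
        # peak amplitude, spectral centroid
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        centroid = np.sum(freqs * spectrum) / max(np.sum(spectrum), 1e-12)
        zcr = np.mean(np.abs(np.diff(np.signbit(x).astype(int))))
        return np.array([np.sum(x ** 2), zcr, np.mean(x),
                         np.var(x), np.max(np.abs(x)), centroid])

    # concatenate lower-band then upper-band components: [x_m1..x_m6, x_m7..x_m12]
    return np.concatenate([six(low), six(high)])
```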
“…Analysis of underwater bioacoustic signals has been the subject of numerous publications, in which several approaches have been proposed to recognize and classify these signals [1][2][3][4][5][6][7][8][9][10][11][12][13][14]. Much like birds, marine mammals are highly vocal animals, and different species can be recognized by their specific sounds [15].…”
Section: Introduction
confidence: 99%