2007
DOI: 10.1109/ijcnn.2007.4371111

Compact hardware for real-time speech recognition using a Liquid State Machine

Abstract: Hardware implementations of Spiking Neural Networks are numerous because they are well suited for implementation in digital and analog hardware, and outperform classic neural networks. This work presents an application-driven digital hardware exploration in which we implement real-time, isolated-digit speech recognition using a Liquid State Machine (a recurrent neural network of spiking neurons where only the output layer is trained). First we test two existing hardware architectures, but they appear to be too fas…
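To make the parenthetical definition concrete, the following is a minimal, rate-based reservoir-computing sketch in NumPy of readout-only training. It is an illustration with assumed layer sizes and a ridge-regression readout, not the spiking hardware pipeline of the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, n_classes = 13, 200, 10   # illustrative sizes: 13 speech features, 200 units, 10 digits

    W_in = rng.normal(0.0, 0.5, (n_res, n_in))
    W = rng.normal(0.0, 1.0, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the recurrent weights in a stable regime

    def run_reservoir(u_seq):
        # Collect reservoir states for one utterance (T x n_in); the reservoir itself is never trained.
        x = np.zeros(n_res)
        states = []
        for u in u_seq:
            x = np.tanh(W @ x + W_in @ u)
            states.append(x.copy())
        return np.array(states)

    def train_readout(utterances, labels, ridge=1e-2):
        # Only this linear readout is trained, here by ridge regression on time-averaged states.
        X = np.array([run_reservoir(u).mean(axis=0) for u in utterances])
        Y = np.eye(n_classes)[labels]
        return np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

    def classify(W_out, u_seq):
        return int(np.argmax(run_reservoir(u_seq).mean(axis=0) @ W_out))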

Cited by 27 publications (26 citation statements)
References 20 publications (31 reference statements)

Citation statements (ordered by relevance):
“…For instance, by lowering the timescale of the reservoir when the input is slowly varying (when the robot drives in a straight line along the main corridor) and increasing this timescale back otherwise, the performance can greatly be enhanced for long delay periods because the memory of the reservoir is increased with this new scheme [15]. These ideas of working with the timescale of reservoirs can find applications in other areas such as speech recognition [20], [21]. We also plan to validate the current work on a real robotic setup using the mobile robot e-puck [17].…”
Section: Discussion
confidence: 99%
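A minimal sketch of the timescale idea quoted above, using the standard leaky-integrator reservoir update; the variance-based switching rule and all parameter values are illustrative assumptions, not the cited authors' exact scheme:

    import numpy as np

    def leaky_step(x, u, W, W_in, leak):
        # Leaky-integrator update: a small leak rate slows the reservoir and lengthens its memory.
        return (1.0 - leak) * x + leak * np.tanh(W @ x + W_in @ u)

    def run_adaptive_reservoir(u_seq, W, W_in, slow_leak=0.05, fast_leak=0.8,
                               var_threshold=1e-3, window=20):
        # Lower the reservoir timescale while the input varies slowly, restore it otherwise.
        x = np.zeros(W.shape[0])
        states = []
        for t in range(len(u_seq)):
            recent = u_seq[max(0, t - window):t + 1]
            slowly_varying = np.var(recent, axis=0).mean() < var_threshold
            leak = slow_leak if slowly_varying else fast_leak
            x = leaky_step(x, u_seq[t], W, W_in, leak)
            states.append(x.copy())
        return np.array(states)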
“…The reservoir is composed of 400 sigmoidal nodes, scaled to a spectral radius of |λ_max| = 0.9 [6], which approximately sets the reservoir at the edge of stability. The readout layer has 1 output unit which corresponds… The original dataset collected from the simulator is downsampled by a factor of 100, which is equivalent to slowing down the reservoir time scale [9,16]. This is because the robot has a relatively constant low velocity, taking about 1,300 timesteps to go from the start position to the goal in environment E1 (Fig.…”
Section: Modeling a Controller With Long-term Memory: The Road-sign Problem
confidence: 99%
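For concreteness, the two settings quoted above (a 400-node reservoir scaled to spectral radius 0.9, and input downsampled by a factor of 100) could be reproduced as follows; this is an illustrative NumPy reconstruction, not the cited authors' code:

    import numpy as np

    rng = np.random.default_rng(42)

    # Random 400-node reservoir, rescaled so that |lambda_max| = 0.9 (near the edge of stability).
    W = rng.uniform(-1.0, 1.0, (400, 400))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

    def downsample(signal, factor=100):
        # Keeping every 100th sample slows the reservoir time scale relative to the input.
        return signal[::factor]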
“…The memory of the reservoir can become greater by increasing the resampling rate [16], or by slowing down the reservoir dynamics (which is equivalent to adding leaky integrators) [9]. However, there are limitations for slowing down the dynamics since the reservoir still needs to be fast enough to generate the turning movement into the narrow corridors.…”
Section: Modeling a Controller With Long-term Memory: The Road-sign Problem
confidence: 99%
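In the common leaky-integrator (leaky ESN) formulation, an assumed parameterisation that the cited papers may instantiate slightly differently, this trade-off can be made explicit:

    x_t = (1 - a) x_{t-1} + a \tanh(W x_{t-1} + W_in u_t),    0 < a <= 1,

where the effective memory time constant is on the order of 1/a steps. Shrinking a, or equivalently resampling the input at a coarser rate, stretches the reservoir's memory, but the state then also reacts more sluggishly, which is why the dynamics can become too slow to generate the turn into the narrow corridors.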
“…Several chapters in [99] are dedicated to this subject, and more recent work can be found for instance in [170,53,77,33,125,116,150].…”
Section: Implementing SNNs
confidence: 99%