ISLPED '05: Proceedings of the 2005 International Symposium on Low Power Electronics and Design
DOI: 10.1109/lpe.2005.195519

Power reduction by varying sampling rate

Abstract: The rate at which a digital signal processing (DSP) system operates depends on the highest frequency component in the input signal. DSP applications must sample their inputs at a frequency at least twice the highest frequency in the input signal (i.e., the Nyquist rate) to accurately reproduce the signal. Typically a fixed sampling rate, guaranteed to always be high enough, is used. However, an input signal may have periods when the signal has little high frequency content as well as periods of silence. When t…
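The Nyquist criterion described in the abstract can be illustrated with a short sketch that estimates a signal's highest significant frequency and derives a correspondingly lower sampling rate. This is only an illustration of the idea, not the paper's mechanism; the function name, magnitude threshold, and guard margin below are invented for the example.

```python
import numpy as np

def min_sampling_rate(signal, fs, threshold_db=-40.0, margin=1.1):
    """Estimate a reduced-but-safe sampling rate for `signal`.

    Finds the highest frequency whose spectral magnitude is within
    `threshold_db` of the peak, then returns `margin` times twice that
    frequency (the Nyquist rate plus a small guard band).
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Keep bins within threshold_db of the spectral peak.
    significant = spectrum >= spectrum.max() * 10 ** (threshold_db / 20.0)
    f_max = freqs[significant].max()
    return margin * 2.0 * f_max

# A 100 Hz tone sampled at 8 kHz needs far less than 8 kHz:
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 100 * t)
print(min_sampling_rate(tone, fs))  # ~220 Hz
```

In a variable-rate system, the estimate would be recomputed over short windows so the sampling rate can drop during low-bandwidth or silent periods.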

Cited by 7 publications (9 citation statements)
References: 18 publications
“…Moreover, the algorithms were validated while downsampling the signals and still have shown an excellent performance. It has been reported that lower sampling frequency allows to scale down CPU speed, thus, reducing the power consumption [34]. As shown in Table 2, our study has performed at lowest sampling frequency among studies that used accelerometers or IMU sensors.…”
Section: Discussion
confidence: 96%
“…(41) According to Theorems 3 and 4, the MDL mechanism is strategyproof. Note that the objective function in (19) is convex with respect to L M = (x M , y M ). Hence, for each data sample (x, y), we can efficiently compute the optimal solution L * M = (x * M , y * M ) to minimize the SC platform's crowdsourcing cost in (19) without considering strategyproofness and then use it as the label.…”
Section: B. Deep Learning Based Mobile BS Deployment Mechanism
confidence: 99%
“…Note that the objective function in (19) is convex with respect to L M = (x M , y M ). Hence, for each data sample (x, y), we can efficiently compute the optimal solution L * M = (x * M , y * M ) to minimize the SC platform's crowdsourcing cost in (19) without considering strategyproofness and then use it as the label. In the training process, we adopt the mean squared error (MSE) to evaluate the training loss and optimize the deep neural network parameters.…”
Section: B. Deep Learning Based Mobile BS Deployment Mechanism
confidence: 99%
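The per-sample labeling scheme in the quotation above — solving a convex minimization in closed form and then fitting a model with MSE loss — can be sketched with a toy convex cost. The quadratic below stands in for the cited paper's crowdsourcing cost; all names and values are illustrative, not taken from that work.

```python
import numpy as np

def optimal_label(x):
    # argmin_y (y - 2*x)**2 + 1  ->  y* = 2*x, the closed-form minimizer
    # of a convex per-sample cost (a stand-in for the cited objective).
    return 2.0 * x

xs = np.linspace(0.0, 1.0, 50)
ys = optimal_label(xs)  # optimal solutions used as training labels

# Least-squares (MSE) fit of a linear model y = w*x recovers w = 2.
w = (xs @ ys) / (xs @ xs)
print(round(w, 6))  # 2.0
```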
“…However, although halving the sampling frequency (clock rate) of a transistor cuts its power consumption in half (e.g. Dieter et al 2005), it may also decrease the quality recorded physiological signal, thereby adding a source of error during HRV outcome measure calculation.…”
Section: Implications for the HRV Literature
confidence: 99%
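The halving claim in the quotation above follows from the standard dynamic-power model for CMOS logic, P = α·C·V²·f, in which power is linear in clock frequency at fixed supply voltage. A minimal sketch, with illustrative (not measured) parameter values:

```python
def dynamic_power(alpha, c_load, v_dd, freq):
    """Dynamic CMOS switching power: P = alpha * C * V^2 * f."""
    return alpha * c_load * v_dd ** 2 * freq

# Illustrative values: activity factor 0.5, 1 nF load, 1.2 V supply.
p_full = dynamic_power(0.5, 1e-9, 1.2, 100e6)  # 100 MHz clock
p_half = dynamic_power(0.5, 1e-9, 1.2, 50e6)   # 50 MHz, same voltage
print(p_half / p_full)  # 0.5: halving f halves dynamic power
```

At the lower frequency, the supply voltage can often be reduced as well, so combined voltage and frequency scaling yields greater-than-linear savings; the signal-quality trade-off the quotation raises is the cost of that reduction.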