2020
DOI: 10.1109/jetcas.2020.3023481

On the Distribution of Clique-Based Neural Networks for Edge AI

Abstract: Distributed smart sensors are increasingly used in applications such as biomedical or domestic monitoring. However, each sensor broadcasts data wirelessly to the others or to an aggregator, which leads to energy-hungry sensor nodes and does not ensure data privacy. To tackle both challenges, this work proposes to distribute the feature extraction and a part of a clique-based neural network (CBNN) in each sensor node. This scheme allows standardizing data at the sensor level, ensuring privacy if the data is intercept…

Cited by 5 publications (6 citation statements)
References 17 publications

“…The total number of bits transmitted by a sensor node is log2(N_C) + log2(N_N). In terms of power consumption and latency, this transmission scheme is more efficient than computing all the sensor features with the whole CBNN in the aggregator, for a value of N_N greater than 27 [4]. Moreover, the embedded memory depicted in Fig.…”
Section: A. CBNN Behavior and Distribution
confidence: 99%
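As a rough illustration of the quoted payload formula, the sketch below computes the per-node transmission cost for a few hypothetical (N_C, N_N) pairs. The actual cluster and neuron counts, and the N_N > 27 break-even analysis against a centralized CBNN, come from [4] and are not reproduced here; the rounding to whole bits is likewise an assumption of this sketch.

```python
import math

def bits_per_node(n_clusters: int, n_neurons: int) -> int:
    """Payload a sensor node transmits per inference, following the quoted
    formula log2(N_C) + log2(N_N): presumably the cluster index plus the
    index of the winning neuron within that cluster. Rounding up to whole
    bits is an assumption; the excerpt states the formula without rounding."""
    return math.ceil(math.log2(n_clusters)) + math.ceil(math.log2(n_neurons))

# Hypothetical sizes only; N_C and N_N are not given in this excerpt.
for n_c, n_n in [(8, 16), (8, 27), (8, 32), (16, 64)]:
    print(f"N_C={n_c}, N_N={n_n}: {bits_per_node(n_c, n_n)} bits per node")
```

This only shows the logarithmic scaling of the transmitted payload; it makes no claim about the power or latency figures reported in [4].
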
“…The static energy consumption of the global memory is 20 times higher than that of the associative operation itself, and its silicon area dramatically increases the overall circuit footprint. Distributing the memory mitigates these drawbacks, which is not demonstrated in [4] because the memory is implemented externally on an FPGA. Overall, for a distribution across 16 sensor nodes, the energy consumption normalized to the embedded memory size is reduced in this work by a factor of 6.5.…”
Section: B. State of the Art of Associative Memories for WBAN
confidence: 99%
“…These models have been further developed in the literature (Aliabadi, Berrou, Gripon, & Jiang, 2014; Boguslawski, Gripon, Seguin, & Heitzmann, 2014; Jarollahi, Onizawa, Gripon, & Gross, 2014; Jarollahi, Gripon, Onizawa, & Gross, 2015; Jiang, Marques, Kirsch, & Berrou, 2015; Jiang, Gripon, Berrou, & Rabbat, 2016; Mofrad, Ferdosi, Parker, & Tadayon, 2015; Mofrad, Parker, Ferdosi, & Tadayon, 2016; Mofrad & Parker, 2017; Berrou & Kim-Dufor, 2018) and used in many applications, such as solving feature correspondence problems (Aboudib, Gripon, & Coppin, 2016), devising low-power content-addressable memory (Jarollahi et al., 2015), oriented edge detection in images (Danilo et al., 2015), image classification with convolutional neural networks (Hacene, Gripon, Farrugia, Arzel, & Jezequel, 2019), and finding all matches of a probe in a database (Hacene, Gripon, Farrugia, Arzel, & Jezequel, 2017), to mention a few. Furthermore, they were implemented on a general-purpose graphics processing unit (GPU) (Yao, Gripon, & Rabbat, 2014), in 65-nm CMOS (Larras, Chollet, Lahuec, Seguin, & Arzel, 2018), and in distributed smart sensor architectures (Larras & Frappé, 2020). Therefore, CCN models can be referred to as an important brain-inspired memory system (Berrou, Dufor, Gripon, & Jiang, 2014) that became a basis for a wide range of research in associative memory models.…”
Section: Introduction
confidence: 99%