2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) 2019
DOI: 10.1109/iccvw.2019.00323

VarGFaceNet: An Efficient Variable Group Convolutional Neural Network for Lightweight Face Recognition

Abstract: To improve the discriminative and generalization ability of lightweight networks for face recognition, we propose an efficient variable group convolutional network called VarGFaceNet. Variable group convolution, introduced by VarGNet, resolves the conflict between a small computational cost and an imbalance of computational intensity inside a block. We employ variable group convolution to design a network that supports large-scale face identification while reducing computational cost and parameters. Specifi…
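To make the variable group convolution idea concrete, here is a minimal PyTorch sketch (an illustration, not the authors' code): rather than fixing the number of groups, the number of channels per group is held constant (the value 8 and the class name VarGConv are assumptions for this example), so the group count varies with the channel width of the layer.

    import torch
    import torch.nn as nn

    class VarGConv(nn.Module):
        """Sketch of a variable group convolution: channels per group are fixed,
        so the number of groups grows with the input width (not the authors' code)."""
        def __init__(self, in_channels, out_channels, kernel_size=3, stride=1,
                     channels_per_group=8):
            super().__init__()
            assert in_channels % channels_per_group == 0
            groups = in_channels // channels_per_group  # group count varies with width
            self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride,
                                  padding=kernel_size // 2, groups=groups, bias=False)
            self.bn = nn.BatchNorm2d(out_channels)
            self.act = nn.PReLU(out_channels)

        def forward(self, x):
            return self.act(self.bn(self.conv(x)))

    # Toy usage on a 112x112 feature map: 64 input channels -> 64 // 8 = 8 groups
    x = torch.randn(1, 64, 112, 112)
    y = VarGConv(64, 128)(x)
    print(y.shape)  # torch.Size([1, 128, 112, 112])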

Cited by 86 publications (72 citation statements) | References 29 publications

“…Lightweight neural networks tailored specifically for the face recognition task have been proposed, such as MobileFaceNet [13], ShuffleFaceNet [66], and VarGFaceNet [107].…”
Section: Lightweight Convolutional Neural Network for Face Recognition
confidence: 99%
“…Depending on the depth of the network and the spatial resolution of the features where such distillation is applied, it makes the student mimic the teacher at different levels of abstraction. However, over-regularization of hidden layers can lead to poor quality, so hints are usually applied only to the embedding (pre-classification) layer [16,21]. To successfully guide the student even at the initial layers, a modification of the hints idea was proposed in [13].…”
Section: Individual Knowledge Distillation
confidence: 99%
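The embedding-level hint described in the statement above can be sketched as follows (an illustration, not code from any of the cited papers; the function name and the 512-dimensional embeddings are assumptions): the student is trained to reproduce the teacher's pre-classification embedding with an L2 penalty, while hidden layers are left unconstrained.

    import torch
    import torch.nn.functional as F

    def embedding_hint_loss(student_emb, teacher_emb):
        # L2 hint applied only to the embedding (pre-classification) layer:
        # the student mimics the teacher's representation; no constraints
        # are placed on hidden layers, which could over-regularize them.
        return F.mse_loss(student_emb, teacher_emb.detach())

    # Toy usage with hypothetical 512-d embeddings
    s = torch.randn(4, 512, requires_grad=True)
    t = torch.randn(4, 512)
    embedding_hint_loss(s, t).backward()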
“…Some recent works [21,22] follow the idea of hint connections and impose constraints on the discrepancy between the teacher's and student's embeddings. However, to better fit the angular nature of the conventional losses used to train face recognition networks [28,29,31], the authors penalize cosine similarity instead of Euclidean distance.…”
Section: Knowledge Distillation for Face Recognition
confidence: 99%
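A cosine-based variant of the same hint can be sketched like this (again an illustration under the same assumed 512-d embeddings, not the cited authors' code): the penalty drives the angle between student and teacher embeddings toward zero, matching the angular character of face recognition losses.

    import torch
    import torch.nn.functional as F

    def cosine_hint_loss(student_emb, teacher_emb):
        # Penalize angular discrepancy instead of Euclidean distance:
        # 1 - cos(theta) is zero when the embeddings point in the same direction.
        cos = F.cosine_similarity(student_emb, teacher_emb.detach(), dim=1)
        return (1.0 - cos).mean()

    # Toy usage
    s = torch.randn(4, 512, requires_grad=True)
    t = torch.randn(4, 512)
    cosine_hint_loss(s, t).backward()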
“…Convolutional Neural Networks combine three basic architectural ideas, namely local receptive fields, shared weights in the form of filters, and spatial subsampling in the form of pooling. Convolution is the operation that applies such a filter to the input [5]. In the filtering process there are two matrices: the input matrix and the kernel matrix.…”
Section: Introduction
confidence: 99%
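The filtering operation mentioned in that passage can be illustrated with a small NumPy sketch (an illustrative example, not tied to any of the cited papers; the function name and toy matrices are assumptions): the kernel matrix slides over the input matrix and each output value is the sum of the element-wise products.

    import numpy as np

    def convolve2d(inp, kernel):
        # "Valid" 2D convolution (technically cross-correlation, as used in CNNs):
        # slide the kernel over the input and sum the element-wise products.
        kh, kw = kernel.shape
        oh = inp.shape[0] - kh + 1
        ow = inp.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(inp[i:i + kh, j:j + kw] * kernel)
        return out

    # Toy example: 4x4 input matrix filtered with a 3x3 Laplacian-style kernel
    inp = np.arange(16, dtype=float).reshape(4, 4)
    kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    print(convolve2d(inp, kernel))  # 2x2 output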