2022
DOI: 10.48550/arxiv.2206.10526
Preprint
QuantFace: Towards Lightweight Face Recognition by Synthetic Data Low-bit Quantization

Abstract: Deep learning-based face recognition models follow the common trend in deep neural networks by utilizing full-precision floating-point networks with high computational costs. Deploying such networks in use-cases constrained by computational requirements is often infeasible due to the large memory required by the full-precision model. Previous compact face recognition approaches proposed to design special compact architectures and train them from scratch using real training data, which may not be available in a…
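The low-bit quantization the abstract refers to can be sketched minimally. The helper below is a hypothetical illustration of uniform symmetric weight quantization — the basic operation such methods build on — not QuantFace's actual training procedure; the function names and the example weights are invented for illustration.

```python
def quantize_symmetric(weights, bits):
    """Uniformly quantize a list of float weights to signed `bits`-bit codes.

    Returns (codes, scale); the dequantized value of a code c is c * scale.
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit, 1 for 2-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0  # guard all-zero input
    clamp = lambda q: max(-qmax - 1, min(qmax, q))
    return [clamp(round(w / scale)) for w in weights], scale

def dequantize(codes, scale):
    """Map integer codes back to approximate float weights."""
    return [c * scale for c in codes]

# Fewer bits -> coarser grid -> larger reconstruction error.
weights = [0.91, -0.42, 0.07, -1.30]
for bits in (8, 2):
    codes, scale = quantize_symmetric(weights, bits)
    approx = dequantize(codes, scale)
    err = max(abs(w - a) for w, a in zip(weights, approx))
    print(f"{bits}-bit codes {codes}, max abs error {err:.3f}")
```

The growing error at 2 bits mirrors the accuracy trade-off the citing works discuss below: with only four representable levels per weight, the network must absorb much larger rounding noise.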

Cited by 3 publications (5 citation statements). References 38 publications.
“…The 2-bit network also sees an improvement after QAT, but clearly has to sacrifice accuracy for the smaller weight representation. The results shown in the work of QuantFace [9] show a better implementation of MobileFaceNet using the PyTorch framework and a different training dataset. The authors mention that their method did not converge for a 4-bit implementation, which is achieved with this work.…”
Section: Results
confidence: 99%
“…Unfortunately, the quantization of an efficient neural network is a relatively unexplored area in the field of face recognition. Roughly at the same time as the publication of this paper, QuantFace [9] was introduced and seems to show promising results. Quantization significantly reduces the total solution space by reducing the number of representation spaces per feature.…”
Section: Introduction
confidence: 99%
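The "representation spaces" point in the statement above is, at its core, counting: a b-bit weight admits only 2^b distinct levels, versus the 2^32 bit patterns of an FP32 weight. A one-line illustration:

```python
# Distinct representable levels per weight at common bit-widths.
for bits in (2, 4, 8, 32):
    print(f"{bits:>2}-bit: {2 ** bits:,} levels")
# -> 4, 16, 256, and 4,294,967,296 levels respectively
```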
“…The other is the introduction of novel architectures, such as ResNets [13], that pushed the limits of the State-of-the-Art. Finally, an unprecedented proliferation of the internet led to remarkable growth in the data available and how it can be collected [14], with recent works aiming at replacing such data with privacy-friendly synthetic data [15], [16], [17].…”
Section: Related Work (A. Face Recognition)
confidence: 99%
“…Currently, CNNs are more frequently used than traditional feature-extraction methods for face recognition, as they can solve common related issues such as changes in facial expressions, illumination, poses, low resolution, and occlusion [1]. CNNs are commonly built with complex architectures and high computational costs [4], with examples such as DeepFace [5], FaceNet [6], ArcFace [7], and MagFace [8]. Due to the huge amount of memory that these methods require, their applications are not designed to work in real-time on embedded devices with limited resources [4, 9].…”
Section: Introduction
confidence: 99%
“…CNNs are commonly built with complex architectures and high computational costs [4], with examples such as DeepFace [5], FaceNet [6], ArcFace [7], and MagFace [8]. Due to the huge amount of memory that these methods require, their applications are not designed to work in real-time on embedded devices with limited resources [4, 9]. Therefore, lightweight CNN architectures have arisen that cover some of the mentioned requirements [9].…”
Section: Introduction
confidence: 99%