In this paper, we propose a novel layer based on the fast Walsh-Hadamard transform (WHT) and smooth-thresholding to replace 1 × 1 convolution layers in deep neural networks. In the WHT domain, we denoise the transform-domain coefficients using the new smooth-thresholding non-linearity, a smoothed version of the well-known soft-thresholding operator. We also introduce a family of multiplication-free operators derived from the basic 2 × 2 Hadamard transform to implement 3 × 3 depthwise separable convolution layers. Using these two types of layers, we replace the bottleneck layers in MobileNet-V2 to reduce the network's number of parameters with a slight loss in accuracy. For example, by replacing the final third of the bottleneck layers, we reduce the number of parameters from 2.270M to 947K. This reduces the accuracy from 95.21% to 92.88% on the CIFAR-10 dataset. Our approach significantly improves the speed of data processing. The fast Walsh-Hadamard transform has a computational complexity of O(m log₂ m). As a result, it is computationally more efficient than the 1 × 1 convolution layer. The fast Walsh-Hadamard layer processes a 10 × 32 × 32 × 1024 tensor about two times faster than a 1 × 1 convolution layer on an NVIDIA Jetson Nano board.
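To make the core mechanism concrete, the following is a minimal NumPy sketch of the two ingredients the abstract names: an iterative fast Walsh-Hadamard transform with O(m log₂ m) butterfly structure, and a smooth-thresholding non-linearity. The abstract does not give the exact smooth-thresholding formula; the tanh-based smoothing of soft-thresholding used here (and the threshold value T) is an assumption for illustration, as is applying the transform along the channel axis.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform along the last axis.

    The last-axis length m must be a power of two. The iterative
    butterfly below performs log2(m) passes of m additions/subtractions,
    giving the O(m log2 m) complexity cited in the abstract (no
    multiplications are needed for the transform itself).
    """
    x = np.array(x, dtype=np.float32, copy=True)
    m = x.shape[-1]
    h = 1
    while h < m:
        for i in range(0, m, 2 * h):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h].copy()
            x[..., i:i + h] = a + b          # sum butterfly branch
            x[..., i + h:i + 2 * h] = a - b  # difference butterfly branch
        h *= 2
    return x

def smooth_threshold(x, T):
    """Assumed smooth-thresholding: tanh(x) * max(|x| - T, 0).

    Replacing sign(x) in soft-thresholding sign(x) * max(|x| - T, 0)
    with tanh(x) yields a denoising non-linearity that is smooth at the
    origin, so it is friendlier to gradient-based training.
    """
    return np.tanh(x) * np.maximum(np.abs(x) - T, 0.0)

# Usage sketch on the tensor shape benchmarked in the abstract:
# transform the channel axis, denoise, and transform back.
x = np.random.randn(10, 32, 32, 1024).astype(np.float32)
y = smooth_threshold(fwht(x), T=0.1)
x_out = fwht(y) / x.shape[-1]  # the Hadamard matrix is its own inverse up to a 1/m scale
```

Because the Walsh-Hadamard matrix contains only ±1 entries, the forward and inverse transforms reduce to additions and subtractions, which is what makes this layer a cheap stand-in for a learned 1 × 1 convolution.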