Look-up table (LUT) classifiers are often used to construct concise classifiers for rapid object detection because of their favorable convergence ability. However, their poor generalization ability restricts their applications. This paper proposes a novel improvement to LUT classifiers in which the new confidence of each partition is recalculated by smoothing the old confidences within its neighboring partitions. The new confidences generalize better than the old ones because each new prediction is supported by more training samples and the high-frequency components in the sequence of old predictions are greatly suppressed by the smoothing operation. Both a weight sum smoothing method and a confidence smoothing method are introduced; they add negligible extra computation cost at training time and no extra cost at test time. Experimental results on upright frontal face detection, using smoothed LUT classifiers based on Haar-like rectangle features with identical smoothing width and smoothing factor for all partitions, show that smoothed LUT classifiers generalize much better while converging somewhat worse than unsmoothed LUT classifiers. Moreover, smoothed LUT classifiers can delicately balance generalization ability against convergence ability through carefully set smoothing parameters.
I. INTRODUCTION

Originated from domain-partitioning confidence-rated hypotheses [1], look-up table (LUT) classifiers [5], [2], [17] (response binning classifiers) are often used to construct very concise classifiers through Real AdaBoost [1] for rapid object detection and classification because of their favorable convergence ability, for example in face detection [2]-[4], [9]-[16], [19]-[21], gender classification [5], pedestrian detection [6], [22], [23], and incident detection [7]. The LUT classifiers used in these papers are often revised by the authors to meet their needs, for example by using adaptive (far fewer) partitions for resource-constrained devices [9], discarding negative responses and adding global predictions [7], adjusting the confidences online for online learning [10], using 0-1 LUT classifiers to meet the needs of Discrete AdaBoost rather than Real AdaBoost [5], or applying Bayesian-error-based LUT classifiers for higher performance [4]. Besides, various criteria are used to choose optimal LUT classifiers to form strong classifiers, such as minimizing the Bhattacharyya distance [1], minimizing the Kullback-Leibler divergence [19], or minimizing the symmetric Jensen-Shannon divergence [12]. However, the poor generalization ability of LUT classifiers restricts their applications.
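To make the underlying idea concrete, the following is a minimal Python/NumPy sketch of how a domain-partitioning (LUT) weak classifier's per-partition confidences might be computed under Real AdaBoost and then smoothed over neighboring partitions. The binning scheme (feature values assumed normalized to [0, 1)), the uniform neighbor weighting, and the parameter names `width` and `factor` are illustrative assumptions for exposition, not the paper's exact weight sum or confidence smoothing formulations.

```python
import numpy as np


def lut_confidences(feature_values, labels, weights, n_bins=64, eps=1e-6):
    """Per-bin confidences of a LUT weak classifier in Real AdaBoost style:
    c_j = 0.5 * ln((W+_j + eps) / (W-_j + eps)), where W+_j / W-_j are the
    summed weights of positive / negative samples falling into bin j.
    Assumes feature_values are normalized to [0, 1)."""
    feature_values = np.asarray(feature_values, dtype=float)
    bins = np.clip((feature_values * n_bins).astype(int), 0, n_bins - 1)
    w_pos = np.zeros(n_bins)
    w_neg = np.zeros(n_bins)
    for b, y, w in zip(bins, labels, weights):
        if y > 0:
            w_pos[b] += w
        else:
            w_neg[b] += w
    return 0.5 * np.log((w_pos + eps) / (w_neg + eps))


def smooth_confidences(conf, width=1, factor=0.5):
    """Hypothetical neighbor smoothing: each bin's new confidence is a blend
    of its old value and the mean of its neighbors within `width` bins,
    controlled by the smoothing factor in [0, 1]."""
    conf = np.asarray(conf, dtype=float)
    n = len(conf)
    smoothed = np.empty(n)
    for j in range(n):
        lo, hi = max(0, j - width), min(n, j + width + 1)
        neighbors = np.concatenate([conf[lo:j], conf[j + 1:hi]])
        if neighbors.size == 0:
            smoothed[j] = conf[j]
        else:
            smoothed[j] = (1 - factor) * conf[j] + factor * neighbors.mean()
    return smoothed
```

In this sketch, larger `width` or `factor` values spread each bin's confidence over more training samples (better generalization), at the cost of blurring the fit to the training distribution (slower convergence), which mirrors the trade-off the paper describes.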