The line density direction (LDD) feature, the gradient direction (GD) feature, and deep convolutional neural network-based (convNet-based) features are widely employed in handwritten character recognition and achieve acceptable accuracies. ConvNet-based methods can learn feature representations not only from a raw pattern image but also from domain-specific LDD and GD feature maps; we refer to these variants as convNet-based-Raw, convNet-based-LDD, and convNet-based-GD, respectively. In this paper, we present an independent comparative analysis of the five features under identical working conditions, covering both preprocessing and algorithm implementation, on two handwritten character databases: CASIA-HWDB1.0 (Chinese) and TUAT HANDS (Japanese). The experimental results demonstrate that convNet-based feature extraction is more robust and discriminative than the two traditional methods, LDD and GD, for both handwritten Chinese character recognition (HCCR) and handwritten Japanese character recognition (HJCR). Furthermore, among the three convNet-based feature extraction methods, convNet-based-GD achieves the highest accuracy for both HCCR and HJCR. Compared with the traditional LDD and GD methods, the best accuracies obtained with convNet-based-GD improve by 3.04% and 2.31% for HCCR, and by 3.15% and 1.54% for HJCR, respectively. Similarly, compared with the two other convNet-based methods, convNet-based-Raw and convNet-based-LDD, the best accuracies improve by 0.44% and 0.25% for HCCR, and by 0.65% and 0.08% for HJCR, respectively. These experimental comparisons provide a valuable reference for feature selection in handwritten character recognition.
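To make the GD feature concrete, the sketch below shows one common way such a feature is computed: gradient magnitude is accumulated into a small number of direction planes and pooled over a coarse grid. This is a generic NumPy illustration under our own assumptions (Sobel-style gradients via `np.gradient`, 8 direction bins, an 8×8 grid; the function name `gd_feature` is hypothetical), not the exact implementation evaluated in the paper.

```python
import numpy as np

def gd_feature(img, n_bins=8, grid=8):
    """Sketch of a gradient-direction (GD) feature map.

    img: 2-D float array, a normalized character image (0 = background).
    Returns an array of shape (grid, grid, n_bins): gradient magnitude
    accumulated into n_bins direction planes, pooled over a coarse grid.
    """
    img = img.astype(np.float64)
    # Image gradients; a generic choice, the paper's exact filter may differ.
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)          # gradient magnitude
    ang = np.arctan2(gy, gx)        # gradient direction in (-pi, pi]
    # Quantize each pixel's direction into one of n_bins planes.
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    h, w = img.shape
    feat = np.zeros((grid, grid, n_bins))
    # Map each pixel to a grid cell and accumulate its magnitude there.
    ys = np.minimum((np.arange(h) * grid) // h, grid - 1)
    xs = np.minimum((np.arange(w) * grid) // w, grid - 1)
    for i in range(h):
        for j in range(w):
            feat[ys[i], xs[j], bins[i, j]] += mag[i, j]
    return feat

# Example: a synthetic 64x64 image containing one vertical stroke.
img = np.zeros((64, 64))
img[20:44, 30:34] = 1.0
f = gd_feature(img)
print(f.shape)  # (8, 8, 8)
```

In the convNet-based-GD variant described above, direction-plane maps of this kind (rather than the raw image) would serve as the network's input channels.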