In this paper, we propose a low-power memory-based computing architecture, called the selective computing architecture (SCA). It consists of multipliers and a look-up-table (LUT)-based component, namely a multi-context ternary content-addressable memory (MC-TCAM). Either component is selected according to input-data conditions in neural networks (NNs). Compared with quantized NNs, the proposed architecture performs multiplication with higher accuracy at low power consumption. When an input value stored in the MC-TCAM appears, the corresponding multiplication results for multiple weights are retrieved directly. The MC-TCAM stores only a shortened length of the input data, which enables low-power computing. The performance of the SCA is determined by three physical parameters concerning the configuration of the MC-TCAM. The power dissipation of the target NN can be minimized by exploring the design space of these parameters. Hardware based on the proposed architecture is evaluated using a TSMC 65 nm CMOS technology and an MTJ model. For speech command recognition, the power consumed by the multiplications in the first convolutional layer of a convolutional NN is reduced by 67% compared with a solution relying only on multipliers.
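As an informal behavioral illustration of the selection mechanism (not the paper's circuit-level implementation), the sketch below models the idea in Python: a small table keyed by a shortened form of the input returns precomputed products for multiple weights on a hit, and conventional multiplication is used on a miss. All identifiers (`truncate`, `build_lut`, `selective_multiply`, `keep_bits`) and the chosen bit widths are hypothetical assumptions for this sketch.

```python
def truncate(x: int, keep_bits: int = 4, total_bits: int = 8) -> int:
    """Keep only the upper keep_bits of a total_bits-wide unsigned input,
    mimicking the shortened input length stored in the MC-TCAM (assumed widths)."""
    return x >> (total_bits - keep_bits)


def build_lut(frequent_inputs, weights, keep_bits: int = 4):
    """Precompute products of frequently occurring inputs with multiple weights,
    keyed by the shortened input, as the MC-TCAM/LUT path would hold them."""
    return {truncate(x, keep_bits): [x * w for w in weights] for x in frequent_inputs}


def selective_multiply(x, weights, lut, keep_bits: int = 4):
    """Return the products of x with all weights, preferring the LUT path
    (low-power) when the shortened input matches a stored entry."""
    key = truncate(x, keep_bits)
    if key in lut:                       # stored input appears: LUT path
        return lut[key]
    return [x * w for w in weights]      # otherwise: multiplier path


# Example usage with made-up values.
weights = [3, -5, 7]
lut = build_lut(frequent_inputs=[16, 32, 200], weights=weights)
print(selective_multiply(32, weights, lut))   # served from the LUT
print(selective_multiply(99, weights, lut))   # falls back to multipliers
```

In this toy model the LUT trades a small amount of accuracy for power only on near-matching inputs; how the real design balances match length, accuracy, and power is governed by the three MC-TCAM configuration parameters mentioned above.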