The growth of Big Data, together with the memory wall and the power wall, is creating unprecedented demand for Processing In Memory (PIM). A computational memory architecture that supports in-bit-line processing can be a key enabler for PIM, eliminating the overhead of moving data between the processing unit and memory. It promises high bandwidth, massive parallelism, and high energy efficiency. Existing PIM approaches concentrate mostly on near-memory processing (NMP) and/or in-memory processing (IMP). The Compute-line based Computational Memory Architecture (CCMA), built from (multiple) compute-lines (CLs), represents a different approach to in-memory processing. A compute-line is a line that carries out fine-grained operations using the memory cells connected to it. A CL relies on (a selection of) a bit-line for performing elementary logical operations and on a bit-line keeper (KEEPER) for enforcing and stabilizing the resulting values. CCMA is backward compatible with conventional Static Random Access Memory (SRAM) and can be used for state storage. In contrast to conventional SRAM, it eliminates the need to pre-charge and sense the bit-line(s) for read and write operations, which reduces bit-line activity, and it supports in-place combinational logic, which reduces data-transfer latency. When used for in-memory computing, it offers considerable potential to reduce bandwidth and energy consumption by eliminating the overhead of data movement. Moreover, it can easily support any application-specific interconnect topology between multiple compute-lines for parallel applications by hard-wiring their input/output interfaces during chip fabrication. CCMA designs the KEEPER circuitry so that, in one (or two) clock cycle(s) and through bit-line selection, it can multi-row read bit information from the participating memory cells, compute the selected bitwise logic operation, and multi-row write the result to the targeted memory cells.
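The multi-row read, bitwise compute, and multi-row write cycle described above can be illustrated with a behavioral sketch. This is a functional model only, not a circuit-level description of CCMA; the class and method names (`ComputeLine`, `cycle`) are illustrative assumptions, and the wired-AND/NOR behavior on simultaneous row activation is a common property of in-SRAM computing schemes rather than a detail confirmed by this text.

```python
# Behavioral sketch (assumed, not the paper's API) of one compute-line cycle:
# multi-row read from participating cells, a bitwise logic operation on the
# bit-line, and a multi-row write of the KEEPER-stabilized result.

class ComputeLine:
    def __init__(self, rows, word_bits):
        # Each row models a word of SRAM cells attached to this compute-line.
        self.mem = [0] * rows
        self.mask = (1 << word_bits) - 1

    def cycle(self, src_rows, dst_rows, op):
        # Multi-row read: sense all participating cells at once.
        values = [self.mem[r] for r in src_rows]
        # Bitwise logic on the bit-line: simultaneous activation of several
        # rows typically yields a wired-AND on one bit-line and a NOR on its
        # complement, from which other operations can be composed.
        if op == "and":
            result = self.mask
            for v in values:
                result &= v
        elif op == "or":
            result = 0
            for v in values:
                result |= v
        elif op == "nor":
            result = self.mask
            for v in values:
                result &= ~v & self.mask
        else:
            raise ValueError(f"unsupported op: {op}")
        # Multi-row write: copy the stabilized result to all target cells.
        for r in dst_rows:
            self.mem[r] = result
        return result

cl = ComputeLine(rows=8, word_bits=8)
cl.mem[0] = 0b11001100
cl.mem[1] = 0b10101010
print(bin(cl.cycle([0, 1], [2], "and")))  # prints 0b10001000
```

In this model, one `cycle` call corresponds to the one- or two-clock-cycle operation the abstract attributes to the KEEPER circuitry: read, compute, and write-back happen without moving data out of the memory array.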
In this work, toward a deeper investigation of the CCMA architecture and its prospects, we present the CL's capabilities and a statistical analysis of running in-place logic operations, demonstrating potential computational and overall energy savings.