An artificial intelligence (AI) processor is a promising solution for energy-efficient data processing in applications such as health monitoring and image/voice recognition. However, data movement between the compute units and memory imposes memory-wall and power-wall challenges on the conventional computing architecture. Recently, the memory-centric architecture has been revisited to address the data-movement issue, in which the memory itself is equipped with compute capability, a technique known as computing-in-memory (CIM). In this paper, we analyze the data-movement and low-power requirements that AI algorithms impose on AI processors. In addition, we review the development of CIM and the implementation methodologies of CIM architectures. Furthermore, we present several novel solutions that go beyond the traditional analog-digital mixed static random-access memory (SRAM)-based CIM architecture. Finally, recent CIM tape-out studies are summarized and discussed.