The high volumes of data stored in the cloud, coupled with growing concerns about security and privacy, have motivated research on homomorphic encryption (HE), i.e., a technique that enables computation directly on encrypted data, obviating the need for prior decryption. Recent algorithmic advances have enabled a diverse set of homomorphic operations (e.g., addition, multiplication, and division). On the application side, recent work also suggests that HE can be extended to secure, homomorphically encrypted content-addressable memories [i.e., secure content-addressable memories (SCAMs)]. Still, the large datawords that result from homomorphic data encodings (i.e., that must be stored/transferred for computation), compounded with the inherent computational complexity of HE, impede the deployment of homomorphic computer hardware. As an alternative, computing-in-memory (CiM) architectures could significantly reduce the volume of data transfers for SCAM (and other) applications, leading to considerable energy savings and latency reductions. In this regard, we propose a CiM-compatible engine for SCAM (CiM-SCAM) and analyze the pros and cons of three different memory cells: a 6T CMOS SRAM cell and two memory cells based on ferroelectric field-effect transistors (FeFETs), specifically 2T + 1-FeFET and 1-FeFET designs. CiM-SCAM leverages in-place copy buffers (IPCBs), along with customized sense amplifiers that include two types of in-memory adders. Our results suggest that energy savings (search time reductions) of up to 16× (3.2×) are possible with 1-FeFET memory cells, compared with an application-specific integrated circuit (ASIC) approach. Similar improvements are also possible with SRAM and 2T + 1-FeFET memory cells; for the latter, we achieve energy savings (speedup) of up to 13× (3.1×).

INDEX TERMS Computing-in-memory (CiM), emerging technologies, ferroelectric field-effect transistors (FeFETs), homomorphic encryption (HE), secure content-addressable memory (CAM), SRAMs.