Recent breakthroughs in artificial intelligence and deep neural networks (DNNs) have produced explosive demand for computing platforms equipped with customized domain-specific accelerators. However, DNN accelerators introduce security vulnerabilities of their own. Prior work on DNN attacks and defenses has focused mainly on training and inference algorithms or on the robustness of model structures; how to design a secure accelerator architecture has received comparatively little attention, especially given the rapid development of FPGA-based heterogeneous computing SoCs. To address this gap, we propose Nacc-Guard, a lightweight DNN accelerator architecture that effectively defends against neural network bit-flip attacks and memory Trojan attacks. By combining a linear randomization encryption algorithm based on the Trivium stream cipher, confusion coding of interrupt signals, and a hash-based message authentication code (HMAC), Nacc-Guard guarantees the integrity of the uploaded DNN file while ensuring the confidentiality of buffer data. To evaluate Nacc-Guard, we implement NVDLA and a SIMD accelerator coupled with a RISC-V Rocket core and an ARM processor at the RTL level. Experimental results show that Nacc-Guard reduces hardware overhead by 3× compared with a conventional AES implementation. Experiments on VGG, ResNet50, GoogLeNet, and YOLOv4-tiny confirm that the framework ensures secure DNN inference with negligible performance loss, achieving a 3.63× speedup and a 35% energy reduction over the AES baseline.
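To make the two protection mechanisms concrete, the sketch below illustrates the general pattern of stream-cipher XOR encryption for buffer confidentiality plus HMAC verification for model-file integrity. It is a minimal software illustration, not the paper's hardware design: it substitutes a SHA-256 counter-mode keystream for the Trivium cipher, and the function names (`encrypt_buffer`, `verify_model`) are hypothetical.

```python
import hmac
import hashlib


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Stand-in keystream generator (SHA-256 in counter mode).

    The actual design uses the Trivium stream cipher in hardware; this
    only shows that buffer encryption reduces to XOR with a keystream.
    """
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])


def encrypt_buffer(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR the buffer with the keystream; encryption and decryption are identical."""
    ks = keystream(key, nonce, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))


def tag_model(mac_key: bytes, model_bytes: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the uploaded DNN model file."""
    return hmac.new(mac_key, model_bytes, hashlib.sha256).digest()


def verify_model(mac_key: bytes, model_bytes: bytes, tag: bytes) -> bool:
    """Constant-time check of the integrity tag before the model is loaded."""
    return hmac.compare_digest(tag_model(mac_key, model_bytes), tag)


if __name__ == "__main__":
    key, nonce, mac_key = b"k" * 16, b"n" * 8, b"m" * 16
    weights = b"\x01\x02\x03\x04" * 8            # toy DNN weight buffer
    ct = encrypt_buffer(key, nonce, weights)      # buffer confidentiality
    assert encrypt_buffer(key, nonce, ct) == weights
    tag = tag_model(mac_key, weights)             # model-file integrity
    assert verify_model(mac_key, weights, tag)
```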