Deep learning (DL) has been applied to electromyographic (EMG) signal recognition and has achieved high accuracy on multiple classification tasks. However, its deployment on resource-constrained prostheses and human-computer interaction devices remains challenging. To address this problem, this paper implements a low-power system for EMG gesture and force-level recognition based on the Zynq architecture. Firstly, a lightweight network structure was proposed, built on ultra-lightweight depthwise separable convolution (UL-DSC) and channel attention with global average pooling (CA-GAP), to reduce computational complexity while maintaining accuracy. A wearable EMG acquisition device measuring 36 mm × 28 mm × 4 mm was subsequently developed for real-time data acquisition. Finally, a highly parallelized dedicated hardware accelerator architecture was designed for inference computation. Eighteen gestures were tested, including force levels from 22