Researchers have long attempted to control robotic hands and prostheses through biosignals, but no approach has yet matched the capabilities of the human hand. Surface electromyography (sEMG), which records electrical muscle activity using non-invasive electrodes, has been the primary method in most studies. While sEMG-based hand motion decoding shows promise, it has not yet met the requirements for reliable use. Combining different sensing modalities has been shown to improve hand gesture classification accuracy. This work introduces a multimodal bracelet that integrates a 24-channel force myography (FMG) system with six commercial sEMG sensors, each containing a six-axis inertial measurement unit (IMU). To test the device’s functionality, muscular activity was acquired from five participants performing five different gestures in random order, and a random forest model was used to classify the performed gestures from the acquired signals. Combining all modalities yielded the highest classification accuracy for every participant, reaching 92.3±2.6% on average and reducing misclassifications by 37% and 22% compared to using sEMG and FMG alone as input signals, respectively. These results confirm the device’s functionality and its suitability for studying sensor fusion for intent detection in future work, demonstrate the potential benefits of sensor fusion for more robust and accurate hand gesture classification, and pave the way for advanced control of robotic and prosthetic hands.
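
The classification pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: it assumes scikit-learn, uses synthetic data in place of recorded signals, and takes only the channel counts from the text (24 FMG channels, 6 sEMG channels, and 6 six-axis IMUs, i.e. 66 features per sample, with 5 gesture classes).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature layout mirroring the bracelet described in the text:
# 24 FMG channels + 6 sEMG channels + 6 sensors x 6-axis IMU = 66 features.
n_samples = 500
n_features = 24 + 6 + 6 * 6
n_gestures = 5

# Synthetic stand-in for the recorded multimodal signals.
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_gestures, size=n_samples)
# Inject a class-dependent offset into one channel so the toy data is learnable.
X[:, 0] += 3 * y

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"gesture classification accuracy: {acc:.2f}")
```

Fusing modalities as in the paper would simply mean concatenating the per-modality feature columns into `X`, which is what the combined 66-feature layout above represents; training on a single modality corresponds to selecting only that modality's columns.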