Machine learning (ML) systems that rely on sensors obtain observations from them and use these observations to recognize and interpret the current situation. Such systems are susceptible to sensor-based adversarial example (AE) attacks: if some sensors are vulnerable and can be compromised, an attacker can change the system's output by manipulating the values those sensors report. Detecting the compromised sensors is important for defending against sensor-based AEs, because once the sensors used by the attacker are identified, they can be inspected and replaced. In this paper, we propose a method that detects the sensors used in sensor-based AEs by exploiting characteristics of the attack that the attacker cannot avoid. The method introduces a model called the feature-removable model (FRM), which allows us to select which features are used as inputs to the model. Our method detects the sensors used in sensor-based AEs by finding inconsistencies among the outputs of the FRM obtained with different selections of input features. We evaluated our method using a human activity recognition model with sensors attached to the user’s chest, wrist, and ankle. We demonstrate that our method accurately detects the sensors used by the attacker, achieving an average Recall of Detection of 0.92 and an average Precision of Detection of 0.72.
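
To make the inconsistency idea concrete, the toy sketch below is our own illustration, not the authors' implementation: it assumes a model that can be queried with any subset of sensors' features (a stand-in for the FRM, here simple probability averaging) and a simple "alone vs. rest" consistency rule. The sensor names, the stand-in predictor, and the rule are all assumptions made for exposition.

```python
# Toy sketch of detecting a compromised sensor via output inconsistencies.
# The FRM stand-in (probability averaging) and the consistency rule are
# illustrative assumptions, not the authors' algorithm.
import numpy as np

SENSORS = ("chest", "wrist", "ankle")
LABELS = ("walking", "running")

def frm_predict(per_sensor_probs, included):
    """Stand-in for a feature-removable model: predict an activity label
    using only the sensors in `included`, by averaging class probabilities."""
    avg = np.mean([per_sensor_probs[s] for s in included], axis=0)
    return LABELS[int(np.argmax(avg))]

def detect_compromised(per_sensor_probs):
    """Flag each sensor whose output, taken alone, is inconsistent with the
    output obtained from the remaining sensors."""
    suspects = []
    for s in SENSORS:
        alone = frm_predict(per_sensor_probs, [s])
        rest = frm_predict(per_sensor_probs, [t for t in SENSORS if t != s])
        if alone != rest:
            suspects.append(s)
    return suspects

# Example: an attacker perturbs only the wrist sensor toward "running".
probs = {
    "chest": np.array([0.90, 0.10]),   # P(walking), P(running)
    "wrist": np.array([0.20, 0.80]),   # compromised sensor
    "ankle": np.array([0.85, 0.15]),
}
print(detect_compromised(probs))       # -> ['wrist']
```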