Robot assistants and wearable devices are highly useful, but these artificial systems are vulnerable to malicious attacks. This article reports two sets of experiments. The first part of this study simulated a malicious attack on a prosthetic arm system to degrade its operation while the perceptions of 10 human subjects were surveyed. These 10 able-bodied subjects controlled the prosthetic arm and hand with electromyogram (EMG) signals while an artificial sensation of touch was conveyed to their arms, enabling them to feel what the prosthetic hand was grasping as they transported an object from one location to another. This haptic feedback was provided in the normal and abnormal operational modes but was disabled in the extremely abnormal mode, and the EMG control signals for the arm were reversed in both the abnormal and extremely abnormal modes. Results from the simulated attack on the prosthetic arm system showed that the subjects found the haptic feedback helpful in both the normal and abnormal modes of operation. Both the abnormal and extremely abnormal modes adversely affected the subjects' self-reported levels of trust, satisfaction, and frustration with the prosthetic system as they grasped and transported an object. Although these metrics were degraded by system malfunctions resembling a malicious attack on the control functionality, it was possible to rebuild them to their prior levels after the functionality of the prosthetic system was restored. A parallel study in this article simulated a malicious attack on a robot assistant to disrupt its delivery operation modes while the perceptions of 20 human subjects were surveyed. Results showed that the simulated malfunctions adversely affected perceived trust, satisfaction, and frustration, but these metrics could be restored in two different ways as the device functionality was restored.
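
For illustration only, the three operational modes described above can be summarized in a minimal sketch. The article does not provide an implementation, so the names (OperationMode, apply_mode) and the signal conventions (a signed EMG command, a grip-force reading mapped to a haptic stimulus) are assumptions used purely to restate how the simulated attack reverses the control mapping and, in the extremely abnormal mode, also disables the haptic feedback.

```python
from dataclasses import dataclass

# Hypothetical sketch: names and signal conventions are assumptions, not the
# authors' implementation. It only restates the mode structure in the abstract.

@dataclass
class OperationMode:
    name: str
    reverse_emg_control: bool   # simulated attack inverts the EMG control mapping
    haptic_feedback_on: bool    # artificial touch sensation conveyed to the subject's arm

MODES = [
    OperationMode("normal", reverse_emg_control=False, haptic_feedback_on=True),
    OperationMode("abnormal", reverse_emg_control=True, haptic_feedback_on=True),
    OperationMode("extremely_abnormal", reverse_emg_control=True, haptic_feedback_on=False),
]

def apply_mode(emg_command: float, grip_force: float, mode: OperationMode) -> tuple[float, float]:
    """Map a raw EMG-derived command and a measured grip force through the active mode.

    Returns (motor_command, haptic_stimulus): a negated command models the
    reversed control under attack, and a zero stimulus models disabled feedback.
    """
    motor_command = -emg_command if mode.reverse_emg_control else emg_command
    haptic_stimulus = grip_force if mode.haptic_feedback_on else 0.0
    return motor_command, haptic_stimulus

# Example: the same grasp command behaves differently in each mode.
for mode in MODES:
    print(mode.name, apply_mode(emg_command=0.8, grip_force=2.5, mode=mode))
```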