Objective: This research examined the effects of reliability and stated social intent on trust, trustworthiness, and willingness to endorse the use of an autonomous security robot (ASR).

Background: Human–robot interaction in the security domain is plausible, yet little is known about what drives acceptance of ASRs. Past research has relied on static images and game-based simulations to depict robots, rather than on actual humans interacting with actual robots.

Method: A video depicted an ASR interacting with a human. The ASR reviewed access credentials and allowed entrance once they were verified. If the ASR could not verify a visitor's credentials, it instructed the visitor to return to the security checkpoint. The ASR was equipped with a nonlethal device, which it used on one of three visitors (a research confederate). Reliability and stated social intent of the ASR were manipulated in a 2 × 4 between-subjects design (N = 320).

Results: Reliability influenced trust and trustworthiness. Stated social intent influenced trustworthiness. Participants reported being more favorable toward use of the ASR in military contexts than in public contexts.

Conclusion: The study demonstrated that the reliability of the ASR and statements regarding its social intent are important considerations influencing the trust process (inclusive of intentions to be vulnerable and trustworthiness perceptions).

Application: If robotic systems are authorized to use force against humans, public acceptance may depend on the availability of the robot's intent-based programming and on whether the robot's decisions are reliable.