This study examined the effects of automation reliability and multi-tasking on trust and reliance in a simulated unmanned system scenario. Participants performed an insurgent search task with the help of an automated aid that provided information about targets at varying levels of reliability (high, medium, and low). In addition, a multi-tasking condition was implemented in which participants performed a radio communication task designed to increase cognitive demand. Results indicated that participants could not accurately assess the true reliability of the automated aid in any condition and were unable to discriminate between low and medium reliability. The multi-tasking manipulation showed that participants relied more heavily on the automated aid when the secondary task was present. Overall, this study provides insight into the patterns of trust calibration errors that may degrade performance in human-machine teams, particularly when additional task pressure is present.