Distributed text-based communication media (e.g., chat and instant messaging) face the growing problem of malicious "chatbots" or "chatterbots" (automated communication programs posing as humans) that attempt social engineering, gather intelligence, mount phishing attacks, spread malware and spam, and threaten the usability and security of collaborative communication platforms. We provide supporting evidence for the suggestion that gross communication and behavioral patterns (e.g., message size and inter-message delay) can be used to passively distinguish between humans and chatbots. Further, we discuss several potential interrogation strategies for users and chat room administrators who may need to actively distinguish between a human and a chatbot, quickly and reliably, during a distributed communication session. Interestingly, these issues are in many ways analogous to the identification problem faced by the interrogator in a Turing Test, and the proposed methods and strategies may both draw inspiration from and find application in that setting.
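To make the passive-distinction idea concrete, the sketch below groups a chat log by sender and computes the gross behavioral statistics named above (message size and inter-message delay), then applies a simple regularity heuristic. It is an illustrative assumption, not the method evaluated in this work: the log format, feature names, and thresholds are hypothetical placeholders.

```python
# Minimal sketch (illustrative, not the authors' evaluated method): flag chat
# participants whose message-size and inter-message-delay statistics look
# machine-like. Log format and thresholds below are hypothetical assumptions.
from dataclasses import dataclass
from statistics import mean, pstdev
from typing import Iterable

@dataclass
class Message:
    sender: str
    timestamp: float  # seconds since session start
    text: str

def sender_features(messages: Iterable[Message]) -> dict[str, dict[str, float]]:
    """Group messages by sender and compute gross behavioral statistics."""
    by_sender: dict[str, list[Message]] = {}
    for m in messages:
        by_sender.setdefault(m.sender, []).append(m)

    features = {}
    for sender, msgs in by_sender.items():
        msgs.sort(key=lambda m: m.timestamp)
        sizes = [len(m.text) for m in msgs]
        delays = [b.timestamp - a.timestamp for a, b in zip(msgs, msgs[1:])]
        features[sender] = {
            "mean_size": mean(sizes),
            "size_stdev": pstdev(sizes),
            "mean_delay": mean(delays) if delays else 0.0,
            "delay_stdev": pstdev(delays) if delays else 0.0,
        }
    return features

def looks_like_bot(f: dict[str, float]) -> bool:
    """Heuristic: many simple bots post at very regular intervals with very
    uniform message sizes, whereas humans show far more variability.
    The 0.1 coefficient-of-variation cutoffs are placeholder values."""
    regular_timing = f["mean_delay"] > 0 and f["delay_stdev"] / f["mean_delay"] < 0.1
    uniform_sizes = f["mean_size"] > 0 and f["size_stdev"] / f["mean_size"] < 0.1
    return regular_timing and uniform_sizes

if __name__ == "__main__":
    log = [
        Message("alice", 0.0, "hey, anyone around?"),
        Message("spambot", 1.0, "Visit http://example.test for FREE prizes!!!"),
        Message("alice", 7.3, "I was wondering about tonight's game"),
        Message("spambot", 31.0, "Visit http://example.test for FREE prizes!!!"),
        Message("spambot", 61.0, "Visit http://example.test for FREE prizes!!!"),
    ]
    for sender, feats in sender_features(log).items():
        label = "bot-like" if looks_like_bot(feats) else "human-like"
        print(f"{sender}: {label} {feats}")
```

In practice a real passive classifier would replace the fixed cutoffs with a model trained on labeled human and bot traffic, but the feature extraction step would follow the same pattern.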