To the human eye, outputs from large language models have become increasingly indistinguishable from human-generated text. To identify the linguistic properties that separate the two, we therefore used a state-of-the-art chatbot, ChatGPT, and compared the hotel reviews (Study 1a; N = 1,200 total reviews) and news headlines (Study 1b; N = 900 total headlines) it generated with human-generated counterparts across content features (emotion), style features (analytic writing, adjectives), and structural features (readability). Results were consistent across datasets: AI-generated text had a more analytic style and was more affective, more descriptive, and less readable than human-generated text. Classification accuracies for AI-generated vs. human-generated texts ranged from 66% to 86%, exceeding both chance and typical human classification accuracy (~50%). We argue that AI-generated text is inherently deceptive when it communicates personal experiences typical of humans, and that it differs at the language level from intentionally deceptive human-generated text. Implications for AI-Mediated Communication and deception research are discussed.
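The classification accuracies reported above come from supervised models trained on linguistic features. As a rough illustration of that general approach (a hypothetical sketch, not the paper's actual pipeline, feature set, or data), the Python snippet below trains a logistic regression on three crude proxies for style and readability over a toy labeled corpus, assuming scikit-learn is available.

```python
# Hypothetical sketch: classify AI- vs. human-generated text from simple
# linguistic features. The feature proxies and toy corpus below are
# illustrative assumptions, not the study's method or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def features(text):
    words = text.split()
    n_words = max(len(words), 1)
    n_sents = max(text.count(".") + text.count("!") + text.count("?"), 1)
    avg_word_len = sum(len(w) for w in words) / n_words  # crude readability proxy
    words_per_sent = n_words / n_sents                   # longer sentences ~ less readable
    # Naive descriptiveness proxy: share of words with common adjective suffixes.
    adj_rate = sum(
        w.lower().strip(".,!?").endswith(("ous", "ful", "ive", "able"))
        for w in words
    ) / n_words
    return [avg_word_len, words_per_sent, adj_rate]

# Toy labeled corpus: 1 = AI-generated, 0 = human-generated (invented examples).
texts = [
    ("The hotel was wonderful, offering an attentive staff and a truly memorable, restful stay.", 1),
    ("The spacious rooms were immaculate, and the delightful breakfast made the visit unforgettable.", 1),
    ("Every aspect of the service was exceptional, from the welcoming lobby to the luxurious suites.", 1),
    ("Room was fine. Bed squeaked a bit. Staff nice enough. Probably wouldn't go back.", 0),
    ("Stayed one night before a flight. Cheap, clean, did the job.", 0),
    ("Parking was a nightmare and the wifi kept dropping, but the location saved it.", 0),
]

X = np.array([features(t) for t, _ in texts])
y = np.array([label for _, label in texts])

# Cross-validated accuracy; compare against the ~50% chance baseline.
scores = cross_val_score(LogisticRegression(), X, y, cv=3)
print(f"Mean cross-validated accuracy: {scores.mean():.2f} (chance = 0.50)")
```

With real corpora, richer features (e.g., validated emotion and analytic-writing measures) and larger held-out evaluation sets would replace these toy proxies.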