The surge in advancements in large language models (LLMs) has expedited the generation of synthetic text that imitates human writing styles. This, however, raises concerns about the potential misuse of synthetic textual data, which could compromise trust in online content. Against this backdrop, the present research aims to address the key challenges of detecting LLM-generated text. In this study, we used ChatGPT (v3.5) because of its widespread adoption and its capability to comprehend and maintain conversational context, allowing it to produce meaningful and contextually suitable responses. The problem revolves around the task of distinguishing between authentic and artificially generated textual content. To tackle this problem, we first created a dataset containing both real and DeepFake text. Subsequently, we employed transfer learning (TL) and performed DeepFake detection using state-of-the-art (SOTA) large pre-trained language models. Furthermore, we validated the models on benchmark datasets comprising unseen data samples to ensure that the reported performance reflects the ability to generalize to new data. Finally, we discussed this study's theoretical contributions, practical implications, limitations, and potential avenues for future research, aiming to formulate strategies for identifying and detecting texts produced by large generative models. The results were promising, with accuracy ranging from 94% to 99%. The comparison between automatic detection and the human ability to detect DeepFake text revealed a significant gap in the human capacity for its identification, emphasizing an increasing need for sophisticated automated detectors. The investigation into AI-generated content detection holds central importance in the age of LLMs and technology convergence. This study is both timely and adds value to the ongoing discussion regarding the challenges associated with the pertinent theme of "DeepFake text detection", with a special focus on examining the boundaries of human detection.
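To make the transfer-learning step concrete, the sketch below shows one common way to fine-tune a pre-trained language model as a binary real-vs-DeepFake text classifier using the Hugging Face Transformers library. The abstract does not name the specific backbone, data format, or hyperparameters, so the choice of "roberta-base", the CSV schema (columns "text" and "label"), and all training settings here are illustrative assumptions rather than the authors' exact pipeline.

```python
# Minimal transfer-learning sketch: fine-tune a pre-trained LM to classify
# human-written (label 0) vs. ChatGPT-generated (label 1) text.
# Backbone, file names, and hyperparameters are assumptions for illustration.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

MODEL_NAME = "roberta-base"  # assumed backbone; any SOTA pre-trained LM could be used

# Hypothetical dataset files with one text per row and a binary "label" column.
dataset = load_dataset("csv", data_files={"train": "train.csv",
                                          "validation": "valid.csv"})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Truncate/pad to a fixed length so examples can be batched together.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

# Load the pre-trained encoder and attach a fresh 2-class classification head.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

args = TrainingArguments(
    output_dir="deepfake-detector",
    num_train_epochs=3,              # assumed; tune on the validation split
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    evaluation_strategy="epoch",
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])

trainer.train()
print(trainer.evaluate())  # reports validation loss; add compute_metrics for accuracy
```

Validating such a detector on benchmark datasets of unseen samples, as described above, would amount to running the same evaluation step on held-out test splits drawn from sources not used during fine-tuning.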