Detecting deception is critical in fields such as law enforcement, national security, and personal relationships. Traditional methods such as polygraph examinations are widely criticized for their questionable reliability, while emerging real-time technologies such as voice stress analysis and speech processing offer practical alternatives.

This study proposes a lie detection module that leverages audio features in both the time and frequency domains, analyzing key acoustic features and speech patterns for more precise dishonesty detection. By addressing the limitations of traditional methods and suggesting practical alternatives, the work aims to improve existing understanding and system architectures in the field of deception detection.

The study introduces a lie detection algorithm built on a real-world dataset recorded with a handheld microphone to replicate authentic situations. Feature selection employs the random forest technique, so that only the most significant features are retained for training and testing. Under rigorous evaluation, the algorithm achieved 79% accuracy, with Mel-frequency cepstral coefficients (MFCCs) identified as the most informative feature for lie detection. These results underscore the method's potential for real-time deception detection; however, further research is needed to confirm its consistency across different datasets, situations, and demographic groups.
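The sketch below is a minimal illustration (not the authors' implementation) of the pipeline described above: extracting time- and frequency-domain audio features, including MFCCs, with librosa, then using a random forest both to rank feature importance and to classify recordings as truthful or deceptive. File names, labels, and hyperparameters are hypothetical placeholders.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def extract_features(path, sr=16000, n_mfcc=13):
    """Return a fixed-length feature vector for one recording."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # frequency domain
    zcr = librosa.feature.zero_crossing_rate(y)               # time domain
    rms = librosa.feature.rms(y=y)                            # time domain (energy)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # frequency domain
    # Summarize each frame-level feature by its mean over time.
    return np.hstack([mfcc.mean(axis=1), zcr.mean(), rms.mean(), centroid.mean()])


# Hypothetical dataset: (path, label) pairs where 1 = deceptive, 0 = truthful.
samples = [("clip_001.wav", 1), ("clip_002.wav", 0)]  # real recordings go here
X = np.array([extract_features(p) for p, _ in samples])
y = np.array([lbl for _, lbl in samples])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The random forest serves both as the feature-ranking step and the classifier.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Inspect which features the forest considers most important (the study reports
# MFCCs ranking highest), then evaluate held-out accuracy.
ranking = np.argsort(clf.feature_importances_)[::-1]
print("Top feature indices:", ranking[:5])
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Averaging frame-level features over time is one simple way to obtain fixed-length vectors per recording; other aggregation or sequence-modeling choices are possible and the original study may differ.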