This study evaluates the efficiency and scalability of in-memory computing (IMC) against traditional disk-based processing for big data analytics. Utilising the "New York City Taxi Trip Duration" dataset from Kaggle, we designed an experiment around three representative analytical tasks: aggregation, sorting, and filtering. Our objective was to quantify the performance improvement offered by IMC, implemented with Apache Spark, over conventional SQL queries executed on a disk-based system. The findings show that IMC consistently outperforms disk-based processing in execution time, with substantial reductions across all tasks. The aggregation task illustrated the contrast most starkly: IMC completed in 47.3 seconds, compared to 138.7 seconds for disk-based processing. Similar disparities were observed in the sorting and filtering tasks, further underlining IMC's performance advantage. Analysis of resource utilisation, focusing on CPU and RAM consumption, showed higher demands for IMC, underscoring the trade-off between enhanced speed and increased resource usage. The investigation provides a practical perspective on adopting IMC for big data analytics, especially under the resource constraints of home computing environments. By setting theoretical advantages against empirical measurements, this paper contributes to the ongoing discourse on optimising data processing methodologies in the era of big data, offering insight into the balance between computational efficiency and resource management.
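The benchmark design described above, running identical aggregation, sorting, and filtering queries against an in-memory engine and a disk-backed one and comparing wall-clock times, can be sketched in miniature with the Python standard library. This is an illustrative stand-in only: it uses SQLite's `:memory:` mode in place of Apache Spark and a file-backed SQLite database in place of the disk-based SQL system, with a small synthetic table (`trips`, columns `vendor_id` and `trip_duration`) standing in for the Kaggle taxi dataset.

```python
import os
import random
import sqlite3
import tempfile
import time

# Build a small synthetic stand-in for the taxi-trip table.
# (Hypothetical schema for illustration; not the actual Kaggle dataset.)
def make_db(path):
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE trips (vendor_id INTEGER, trip_duration REAL)")
    rows = [(random.randint(1, 2), random.uniform(60, 3600)) for _ in range(50_000)]
    conn.executemany("INSERT INTO trips VALUES (?, ?)", rows)
    conn.commit()
    return conn

# The three task types from the experiment, as SQL queries.
QUERIES = {
    "aggregation": "SELECT vendor_id, AVG(trip_duration) FROM trips GROUP BY vendor_id",
    "sorting":     "SELECT * FROM trips ORDER BY trip_duration DESC LIMIT 10",
    "filtering":   "SELECT COUNT(*) FROM trips WHERE trip_duration > 1800",
}

# Time each query's execution (fetchall forces full evaluation).
def bench(conn):
    times = {}
    for name, sql in QUERIES.items():
        t0 = time.perf_counter()
        conn.execute(sql).fetchall()
        times[name] = time.perf_counter() - t0
    return times

with tempfile.TemporaryDirectory() as d:
    mem_conn = make_db(":memory:")                       # in-memory engine
    disk_conn = make_db(os.path.join(d, "trips.db"))     # disk-backed engine
    mem_times, disk_times = bench(mem_conn), bench(disk_conn)
    for task in QUERIES:
        print(f"{task}: in-memory {mem_times[task]*1e3:.1f} ms, "
              f"disk {disk_times[task]*1e3:.1f} ms")
```

At this toy scale the two engines may perform similarly, since the operating system caches the small file; the execution-time gaps reported in the study emerge only at realistic data volumes, where disk I/O dominates.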