Application needs for big data processing are shifting from planned batch processing to emergent scenarios involving high elasticity. Consequently, for many organisations managing private or public cloud resources it is no longer wise to pre-provision big data frameworks over large fixed-size clusters. Instead, they are looking to provision those frameworks on demand, in the same way that the underlying compute resources such as virtual machines or containers can already be instantiated on demand today. Yet many big data frameworks, including the widely used Apache Spark, do not fit well between underlying resource managers and user requests. With SLASH, we introduce a light-weight serverless provisioning model for worker nodes in standalone Spark clusters that helps organisations slash operating costs while providing greater flexibility and comfort to their users, as well as more sustainable operations, based on a unique triple scaling method.