Experiment-in-the-Loop Computing (EILC) requires support for numerous types of processing and the management of heterogeneous infrastructure across a dynamic range of scales: from the edge to the cloud and HPC, including intermediate resources. Serverless computing is an emerging service model that combines high-level middleware services, such as distributed execution engines for managing tasks, with low-level infrastructure. It offers the potential for improved usability and scalability, but adds to the complexity of managing heterogeneous and dynamic resources. In response, we extend Pilot-Streaming to support serverless platforms. Pilot-Streaming provides a unified abstraction for resource management across HPC, cloud, and serverless infrastructures, and allocates resource containers independent of the application workload, removing the need to write resource-specific code. Understanding the performance and scaling characteristics of streaming applications and infrastructure presents another challenge for EILC. StreamInsight provides insight into the performance of streaming applications and infrastructure, supporting their selection, configuration, and scaling. Underlying StreamInsight is the universal scalability law, which permits the accurate quantification of the scalability properties of streaming applications. Using experiments on HPC and AWS Lambda, we demonstrate that StreamInsight provides an accurate model for a variety of application characteristics, e.g., machine learning model sizes and resource configurations.
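For reference, the universal scalability law referred to above is stated here in Gunther's standard formulation (the notation is assumed, not taken from this abstract): the relative throughput $X(N)$ achieved with $N$ resource units is

\[
X(N) = \frac{N}{1 + \alpha\,(N-1) + \beta\,N\,(N-1)},
\]

where $\alpha$ models contention for shared resources and $\beta$ models coherency (crosstalk) delay. Fitting $\alpha$ and $\beta$ to measured throughput is what allows scalability properties to be quantified and extrapolated.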