Meta-learning is a technique that transfers learning from a pre-built model on known tasks to build a model for unknown tasks. Gradient-based meta-learning algorithms are one such family, using gradient descent for model updates. These meta-learning architectures are hierarchical in nature and hence incur long training times, which are prohibitive for industries that rely on models trained on the most recent data to make relevant predictions. To address these issues, we propose MetaFaaS, a function-as-a-service (FaaS) paradigm on the public cloud that provides a scalable and cost-performance-optimal deployment framework for gradient-based meta-learning architectures. We propose an analytical model to predict the cost and training time on the cloud for a given workload. We validate our approach on multiple meta-learning architectures (MAML, ANIL, ALFA) and attain a speed-up of over 5× in training time on FaaS. We also propose eALFA, a compute-efficient meta-learning architecture, which achieves a speed-up of more than 9× compared to ALFA. We present our results on four quasi-benchmark datasets in meta-learning, namely Omniglot, Mini-ImageNet (ImageNet), FC100 (CIFAR), and CUBirds200.
CCS CONCEPTS
• Computing methodologies → Distributed algorithms.
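For readers unfamiliar with the gradient-based family named in the abstract, the following minimal sketch illustrates a MAML-style inner/outer update on a linear model in PyTorch. It is an illustrative sketch of the general technique only, not the MetaFaaS or eALFA implementation; all names (forward, maml_step, inner_lr, outer_lr) and hyperparameter values are hypothetical.

import torch

def forward(w, x):
    # Simple linear model keeps the functional forward pass explicit.
    return x @ w

def maml_step(w, tasks, inner_lr=0.01, outer_lr=0.001):
    # One meta-update over a batch of tasks, each given as
    # (support_x, support_y, query_x, query_y) tensors.
    meta_grad = torch.zeros_like(w)
    for sx, sy, qx, qy in tasks:
        # Inner loop: one gradient step on the task's support set.
        s_loss = ((forward(w, sx) - sy) ** 2).mean()
        (g,) = torch.autograd.grad(s_loss, w, create_graph=True)
        w_adapted = w - inner_lr * g
        # Outer objective: query loss of the adapted parameters,
        # differentiated back through the inner step.
        q_loss = ((forward(w_adapted, qx) - qy) ** 2).mean()
        (mg,) = torch.autograd.grad(q_loss, w)
        meta_grad += mg
    # Meta-update of the shared initialization.
    new_w = w - outer_lr * meta_grad / len(tasks)
    return new_w.detach().requires_grad_(True)

# Example usage with random data:
w = torch.zeros(5, 1, requires_grad=True)
tasks = [(torch.randn(10, 5), torch.randn(10, 1),
          torch.randn(10, 5), torch.randn(10, 1)) for _ in range(4)]
w = maml_step(w, tasks)

The nested structure above (an inner adaptation loop per task inside an outer meta-update) is what makes these architectures hierarchical and training-time-intensive, which motivates the FaaS-based parallel deployment studied in the paper.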