Deep neural networks have achieved increasingly accurate results on a wide variety of complex tasks. However, much of this improvement is due to the growing use and availability of computational resources (e.g., GPUs, more layers, more parameters). Most state-of-the-art deep networks, despite performing well, over-parameterize the functions they approximate and take a significant amount of time to train. With increased focus on deploying deep neural networks on resource-constrained devices such as smartphones, there has been a push to understand why these models are so resource hungry and how they can be made more efficient. This work evaluates and compares three distinct methods for deep model compression and acceleration: weight pruning, low-rank factorization, and knowledge distillation. Comparisons on VGG networks trained on CIFAR-10 show that each method is effective on its own, but that the true power lies in combining them. We show that by combining pruning and knowledge distillation we can create a compressed network 85 times smaller than the original, while retaining 96% of the original model's accuracy.
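Of the three techniques the abstract names, weight pruning is the simplest to sketch. The snippet below is a minimal illustration of one-shot magnitude pruning, zeroing out the smallest-magnitude weights until a target sparsity is reached; the function name and the 90% sparsity target are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries of `weights` so that
    roughly `sparsity` fraction of the entries become zero.
    (Illustrative one-shot magnitude pruning, not the paper's exact method.)"""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy example: prune a random 64x64 weight matrix to ~90% sparsity
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.9)
print(f"sparsity: {np.mean(pruned == 0):.2f}")
```

In practice, pruned networks are usually fine-tuned afterwards to recover accuracy, and the resulting sparse weight matrices can be stored in compressed form, which is where the size reduction comes from.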
Children as authors and creators need to be supported at all stages of literacy development. This paper presents Picture-Blocks (PB), a constructionist mobile app that allows children (ages 5-9) to create personally meaningful digital pictures while exploring spelling and vocabulary concepts in an open-ended manner. In PB, children can spell any number of picture objects (sprites) into existence, which they can then use to make a picture composition and share with friends. PB also suggests semantically similar sprites, allowing children to explore related objects and discover new words. We evaluated the app in an exploratory pilot with 14 children over a two-week in-the-wild deployment. Qualitative and quantitative examples suggest that our design of the visual scaffolding interactions facilitated (i) high engagement and a sense of authorship via created pictures, (ii) instances of spelling corrections and vocabulary explorations, and (iii) digitally mediated social interaction and remixing. We present our findings on children's interactions and creations, and discuss implications for designers and developers of literacy technologies.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and indicate whether the citing work provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.